Tequila Fish: Ran-dumb ramblings of me…

26Nov/14

Elasticsearch 1.4.0: Marvel Sense fails with “Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://www.server.com:9200/?_=1417024257696. This can be fixed by moving the resource to the same domain or enabling CORS.”

After upgrading an Elasticsearch cluster from v1.3.2 to v1.4.0, using Marvel Sense to run queries against the server would fail with:

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://www.server.com:9200/?_=1417024257696. This can be fixed by moving the resource to the same domain or enabling CORS.

This is because ES 1.4.0 now disables CORS by default in order to patch a security vulnerability. To enable CORS, simply add the following to your elasticsearch.yml file:

http:
  cors:
    enabled: true

You can lock it down further by setting specific origins (such as localhost):

http:
  cors:
    allow-origin: /https?:\/\/localhost(:[0-9]+)?/

There are plenty of other options for CORS in Elasticsearch; you can read about them at http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-http.html#_settings_2
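Putting the two snippets together, a locked-down elasticsearch.yml might look like the following (a sketch; the origin regex is just an example, so adjust it for wherever your Marvel/Sense instance is actually served from):

```yaml
# elasticsearch.yml -- re-enable CORS, but only for localhost origins
http:
  cors:
    enabled: true
    allow-origin: /https?:\/\/localhost(:[0-9]+)?/
```

Restart the node after editing the file for the settings to take effect.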

13Nov/14

Spotlight hung on OS X Yosemite with 100% CPU usage

I recently upgraded from OS X Mavericks to Yosemite, and all went well except for one thing: the mds processes were constantly taking 100% CPU, resulting in decreased system performance and battery life. Now, I've been through this dance before; I know full well that every OS X upgrade triggers a Spotlight re-index to upgrade the Spotlight DB and pick up any new files. But in the past this re-index had taken at most an hour or two on my roughly 80% full 750GB SSD. This time was different: Spotlight was running not for hours, but for days. I decided to let it run, thinking that maybe things were different with Yosemite. After all, Spotlight was one of the big new features highlighted by Apple in their Yosemite reveal. But after 4 days of 24x7 indexing, I decided that enough was enough and started to investigate.

There are a million different things that can go wrong with Spotlight, and plenty of articles out there on how to fix them. Some of the things I tried were:

  • Running Verify Disk & Verify Disk Permissions in Disk Utility
  • Disabling and re-enabling indexing with the mdutil command
  • Manually removing the /.Spotlight-V100 directory so that it would be re-created
  • Adding / to Spotlight's Privacy tab and then removing it to force a re-index

I tried them all, and none worked.

The symptoms were simple: the mds process was constantly taking 100% CPU. I had a sneaking suspicion that it was hitting a file that it didn't know what to do with and would just get stuck and not proceed. I was right. Using lsof I checked what files the mds process had open:

sudo lsof -c '/mds$/'

Amongst all the various /System/Library, /.Spotlight-V100, and /Volumes files and directories, I noticed a file that stuck out like a sore thumb:

/Users/MyName/Documents/Subfolder/SomeApp.app/Contents/MacOS/Flash Player

It was actually listed twice in the lsof output. Weird. Let's go check it out in Finder. Sure enough, trying to navigate there caused Finder itself to freeze up at 100% CPU usage. So I relaunched Finder (Option-RightClick, then Relaunch from the Dock) and manually removed the file from Terminal. I then restarted Spotlight and told it to re-index from scratch:

sudo rm -rf /Users/MyName/Documents/Subfolder/SomeApp.app
sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.metadata.mds.plist
sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.metadata.mds.plist
sudo mdutil -i on -E /

This time I could see progress in the Spotlight window's progress bar, and see files in the Spotlight DB on disk being changed and the total size increasing. About 45 minutes later Spotlight had indexed all 580GB of data. FIXED!

I have no idea why both Spotlight and Finder had trouble with this file. It was never an issue on previous versions of OS X. It was an old custom app built by a friend of mine in 2010. And while mds got stuck on the Flash Player binary within it, Finder froze when navigating to the directory containing the app itself. Weird.

Some helpful commands for when dealing with Spotlight:

Stop Spotlight:
sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.metadata.mds.plist

Start Spotlight:
sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.metadata.mds.plist

Get the size of the Spotlight Database:
sudo du -m /.Spotlight-V100

Get a list of files opened by the MDS process:
lsof -c '/mds$/'

UPDATE

Turns out this troublesome file made its way onto my Time Machine backup as well, causing the *exact same issue* in Time Machine. Backups would start, get stuck on this file, and the backupd process would take up 100% CPU forever until I manually killed the backup. So if your Time Machine is exhibiting similar behavior to what's described above, find out what file(s) it is barfing on with:

sudo lsof -c backupd
or
sudo lsof -p [PID]

Then use Time Machine's "Delete All Backups Of..." functionality to remove it. Kick off another Time Machine run and it should make it all the way through.

If an Apple Engineer comes across this blogpost and would like a copy of the troublesome file, hit me up.

26Jan/12

Installing Bundler for Ruby in Ubuntu 10.04

When attempting to install Bundler for Ruby on Ubuntu 10.04, I got the following error:

shell> sudo gem install bundler
ERROR: Error installing bundler:
bundler requires RubyGems version >= 1.3.6

Running sudo gem -v I saw that I had 1.3.5. To get around this, simply install the available updater gem, then run it:

shell> sudo gem install rubygems-update
shell> sudo /var/lib/gems/1.8/bin/update_rubygems

Now running gem -v I see that I have 1.8.15 and I am able to install bundler:

shell> gem install bundler
Fetching: bundler-1.0.21.gem (100%)
Successfully installed bundler-1.0.21
1 gem installed
Installing ri documentation for bundler-1.0.21...
Installing RDoc documentation for bundler-1.0.21...
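The error above is really just a version comparison failing. If you want to script the check yourself before deciding whether to run the updater, something like this works (a sketch: `version_lt` is a made-up helper name, and `sort -V` requires GNU coreutils):

```shell
# version_lt A B: succeeds when version A sorts strictly before version B
# (sort -V does natural version ordering; GNU coreutils required)
version_lt() {
    [ "$1" != "$2" ] && \
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]
}

# RubyGems 1.3.5 is older than the 1.3.6 that bundler requires:
if version_lt "1.3.5" "1.3.6"; then
    echo "RubyGems too old for bundler; run the updater"
fi
```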

26Jun/11

Unix: Delete all but N most recent files in a directory

Here's a handy little command to delete every file in a directory except for the N most recent files. It is helpful for including in a log rotation or db backup script.

find /path/to/files/ -maxdepth 1 -type f -name '*' -print0 | xargs -r0 ls -t | tail -n +5 | tr '\n' '\0' | xargs -r0 rm

Breaking down and explaining each section of the command, we have:

find /path/to/files/ -maxdepth 1 -type f -name '*' -print0

This will list all files in the specified directory. The reason we use 'find' rather than 'ls' is because we need the full path of the files when later passing the argument list to the 'rm' command. We specify a 'maxdepth' so we only search within the current directory. We also specify a 'type' of 'f' so that we only find files and not other items like directories, sockets, or symbolic links. It's probably best not to use '*' as your 'find' expression, as it could be dangerous if you accidentally point it to the wrong directory. Use something like '*.sql.gz' or '*.log' or whatever suits you. Also note that we are using '-print0' so that the subsequent commands will be able to handle spaces and other special characters in filenames (see http://en.wikipedia.org/wiki/Xargs#The_separator_problem)

xargs -r0 ls -t

This will sort the list returned by the 'find' command in descending order by modification time (newest to oldest). The '-r' parameter instructs xargs not to run if no files are found by the first 'find' command. The '-0' parameter gets around the separator problem described in the previous step.

tail -n +5

This will filter the list to only show from the 5th line onward, which means the 4 newest files are kept. To keep the N most recent files, use +(N+1); for example, +11 keeps the 10 newest.

tr '\n' '\0'

This is similar to using '-print0' with the 'find' command above. We need to clean up the output so that the subsequent xargs command can handle potential spaces in the filenames. So we use the 'tr' command to translate newlines to the null character.

xargs -r0 rm

Finally, this command will delete the files returned by the combination of the previous commands.
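The same pipeline can be wrapped in a small function so the directory, pattern, and count aren't hard-coded (a sketch: `keep_newest` is a made-up name, and `xargs -r` assumes GNU findutils):

```shell
# keep_newest DIR PATTERN N -- delete all files in DIR matching PATTERN
# except the N most recently modified ones
keep_newest() {
    dir=$1; pattern=$2; keep=$3
    find "$dir" -maxdepth 1 -type f -name "$pattern" -print0 \
        | xargs -r0 ls -t \
        | tail -n +"$((keep + 1))" \
        | tr '\n' '\0' \
        | xargs -r0 rm
}

# e.g. keep only the 10 newest gzipped SQL dumps:
# keep_newest /var/backups/db '*.sql.gz' 10
```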

18Aug/10

MySQL: Dynamic ORDER BY clause

When developing some discography software for a website, I came across the need to sort a list of releases differently depending on their release date. I wanted to sort future releases in ascending order (oldest to newest) and sort past releases in descending order (newest to oldest). This isn't an easy task, but after many hours of trial and error, I finally figured it out with some MySQL trickery.

Given the following MySQL Database table named releases:

+-------------+------------------+------+-----+---------+----------------+
| Field       | Type             | Null | Key | Default | Extra          |
+-------------+------------------+------+-----+---------+----------------+
| id          | int(10) unsigned | NO   | PRI | NULL    | auto_increment |
| title       | varchar(256)     | NO   |     | NULL    |                |
| releasedate | date             | NO   |     | NULL    |                |
+-------------+------------------+------+-----+---------+----------------+

I accomplished the dynamic ORDER BY clause by using the following query:

SELECT id, 
       title, 
       releasedate, 
       UNIX_TIMESTAMP(releasedate) AS releasedate_unix, 
       CASE WHEN releasedate > NOW() THEN 0 ELSE 1 END AS futureorpast
FROM releases
ORDER BY futureorpast ASC, 
         CASE WHEN releasedate_unix > UNIX_TIMESTAMP(NOW()) THEN 
            releasedate_unix 
         ELSE 
            (releasedate_unix * -1) 
         END

Now for an explanation. First, we need to "separate" the results into two sets: future releases and past releases. This is accomplished by calculating the futureorpast column, upon which we can sort: future releases are assigned a value of 0 and past releases a value of 1. Sorting on this column ascending puts future releases first, since 0 comes before 1.

Next comes the tricky part: we want to sort future releases ASC but past releases DESC. This can't be done by putting a dynamic ASC/DESC inside a CASE, because CASE can only produce values, not clauses. The trick is to convert the date column into a unix timestamp so that we have an integer, and then negate that integer for the values we want sorted in reverse.

Using this technique I was able to sort a subset of results in one direction and another subset of the same results in a different direction. I'm not sure if this is the best way to accomplish this, so if anyone has other suggestions on how to accomplish this, I'd love to hear them!
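If you want to convince yourself that the negated-timestamp trick orders things correctly without firing up MySQL, here's a rough shell simulation (everything here is made up: `sort_releases` is a hypothetical helper, and the timestamps are toy values with 1000 standing in for "now"):

```shell
# sort_releases NOW TS... -- print timestamps with future ones first in
# ascending order, then past ones in descending order, mirroring the
# (bucket, negated-timestamp) sort key used in the SQL above
sort_releases() {
    now=$1; shift
    for ts in "$@"; do
        if [ "$ts" -gt "$now" ]; then
            printf '0 %s %s\n' "$ts" "$ts"            # future: key = ts
        else
            printf '1 %s %s\n' "$((ts * -1))" "$ts"   # past: key = -ts
        fi
    done | sort -n -k1,1 -k2,2 | awk '{print $3}'
}

sort_releases 1000 800 1200 900 1500   # prints 1200 1500 900 800, one per line
```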

11Aug/10

How to increase bash shell history length in OS X

I do a lot of command line work on my OS X machine, so the history saves me a lot of time running repeat commands and also refreshing my memory on commands that I haven't run in a while. Unfortunately, the default history length in OS X is 500 commands. That seems like a lot, but when you're running 50+ commands a day it can push older commands off the list pretty quickly. This is easily solved by setting the HISTFILESIZE in your .bash_profile file.

First, to find out what HISTFILESIZE is currently set at, run the following command from Terminal:

echo $HISTFILESIZE

To change this value, simply add the following line to your .bash_profile file (found in /Users/yourname/):

HISTFILESIZE=2000

This increases your bash history to 2000 items. It will take effect next time you open a Terminal window.
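One caveat worth knowing: HISTFILESIZE caps the history file on disk, while HISTSIZE caps the in-memory list that gets written to it, so it's safest to raise both. Something like this in your .bash_profile (the sizes are just a suggestion):

```shell
# Cap for the in-memory history of the current session
export HISTSIZE=2000
# Cap for ~/.bash_history, which persists across sessions
export HISTFILESIZE=2000
```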

22Jul/10

eAccelerator and open_basedir: open_basedir restriction in effect. File() is not within the allowed path(s):

After installing eAccelerator on a CentOS 5.5 server running PHP 5.2.10, a bunch of websites began failing with open_basedir errors like so:

[Fri Jul 16 17:53:50 2010] [error] [client XX.XX.XXX.XXX] PHP Warning: require() [function.require]: open_basedir restriction in effect. File() is not within the allowed path(s): (/home/username/) in /home/username/public_html/wp-settings.php on line 19, referer: http://www.server.com/

After doing some research, it turns out that a default compile-time option in eAccelerator is incompatible with open_basedir. The fix is easy enough: simply re-compile with the --without-eaccelerator-use-inode option like so:

make clean
phpize
./configure --without-eaccelerator-use-inode
make
make install

After re-compiling and installing, make sure you clear out the existing eAccelerator files before restarting Apache:

rm -rf /var/cache/eaccelerator/*
apachectl restart

eAccelerator states that the next version will have this compile option set by default.
