A lightweight web server with Lighttpd, PHP and SQLite3 on a Raspberry Pi

Many consumer devices (set top boxes, routers, media centers) use lightweight embedded web servers as their user interface. In this walkthrough, I’ll show you how to install a complete web server on a Raspberry Pi, a cheap ARM-based computer that is perfect for prototyping such appliances.

In embedded systems, we aim at minimizing computational expenditure. Therefore, we’ll use Lighttpd as the web server, and SQLite as the DBMS. For server-side scripting, we’ll use PHP, a nice framework for prototyping : easy to learn, and very popular.

Installing a fresh copy of Raspbian in an SD card

I’ll start from scratch, with a fresh Raspbian system. This is not strictly necessary : you can probably succeed from any working Raspbian installation.

You can start by downloading the Raspbian image, and installing it into the micro SD card. There are different instructions for Linux, for Windows, and for OS X.

For OS X, there are two sets of instructions. In my setup (MacBook Pro 13″, early 2011, Yosemite), only the second set worked, and I’ll explain it here. (If it doesn’t work for you, try the other one.)

Attention : we will be doing low-level disk transfers here. One single distraction and you may lose all your data. Your motivational PowerPoints. Those pictures of the 70th birthday of Auntie Joaquina. That spreadsheet you were almost finishing with all the continuity mistakes in Beverly Hills, 90210. Gone ! Forever ! (No, seriously : backup your stuff. Really !)

(And my lawyer insists that I remind you : as always, proceed at your own risk. If you follow those instructions and your computer turns to a brick, or your cat divorces you, I’m not liable.)

First let’s unzip the disk image. Open a terminal in the folder where the image file is located, and type :

unzip 2015-05-05-raspbian-wheezy.zip

Now, let’s find out the device path to the SD card. First ensure that the SD card slot is empty, and type df -h in the shell. You should get something like :

$ df -h
Filesystem                          Size   Used  Avail Capacity   iused     ifree %iused  Mounted on
/dev/disk1                         930Gi  486Gi  444Gi    53% 127405636 116484474   52%   /
devfs                              183Ki  183Ki    0Bi   100%       636         0  100%   /dev
map -hosts                           0Bi    0Bi    0Bi   100%         0         0  100%   /net
map auto_home                        0Bi    0Bi    0Bi   100%         0         0  100%   /home
localhost:/p9Q694fzz20lCp5sv3CZyj  930Gi  930Gi    0Bi   100%         0         0  100%   /Volumes/MobileBackups

Now, insert the micro SD card in the slot (you might need an adapter), count five seconds, and type df -h again. You should get something like :

$ df -h
Filesystem                          Size   Used  Avail Capacity   iused     ifree %iused  Mounted on
/dev/disk1                         930Gi  486Gi  444Gi    53% 127405653 116484457   52%   /
devfs                              185Ki  185Ki    0Bi   100%       642         0  100%   /dev
map -hosts                           0Bi    0Bi    0Bi   100%         0         0  100%   /net
map auto_home                        0Bi    0Bi    0Bi   100%         0         0  100%   /home
localhost:/p9Q694fzz20lCp5sv3CZyj  930Gi  930Gi    0Bi   100%         0         0  100%   /Volumes/MobileBackups
/dev/disk2s1                        56Mi   20Mi   36Mi    36%       512         0  100%   /Volumes/boot

Got it ? There’s a new device /dev/disk2s1 for the SD card. You might get several new lines if the card currently has more than one partition (e.g. /dev/disk2s1, /dev/disk2s2, etc.)
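If eyeballing the two listings feels error-prone, you can let the shell spot the difference for you. A minimal sketch (the /tmp/before.txt file name is my own choice) :

```shell
# Snapshot df with the slot empty...
df -h > /tmp/before.txt
# ...insert the card, wait a few seconds, then list only the new devices :
df -h | diff /tmp/before.txt - | awk '/^> \/dev\// { print $2 }'
```

Each line printed is one partition of the newly inserted card.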

We need to unmount all those partitions, but without ejecting the device (if you eject it, you’ll have to restart the whole process) :

sudo diskutil unmount /dev/disk2s1

Repeat that command for each partition in the device.
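A small loop saves typing when the card has many partitions. A sketch, assuming the card showed up as disk2 (adjust the number to whatever df showed on YOUR machine). To stay safe, it only prints the commands ; remove the echo to actually run them :

```shell
# Print an unmount command for every mounted partition of disk2
# (we unmount, but do NOT eject the device)
for part in $(df -h | awk '$1 ~ /^\/dev\/disk2s/ { print $1 }'); do
    echo sudo diskutil unmount "$part"
done
```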

Now for the dangerous command. We will use the low-level copy command dd, using the disk image as input, and the SD card device as output. Take a deep breath and check each character thrice before hitting Enter, because overwriting the wrong device will be a nightmare.

Check out the partition names above. If you got /dev/disk2s1, you’ll write to /dev/rdisk2. If you got /dev/disk3s2, you’ll write to /dev/rdisk3, etc. Got it ? Just throw away the s<number> suffix and add an r to the beginning :
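That name surgery is simple enough to do by eye, but the shell can double-check it for you. A one-liner sketch using sed (available by default on OS X) :

```shell
# Derive the raw whole-disk device from any of its partition names
partition=/dev/disk2s1
echo "$partition" | sed -E 's|^/dev/disk([0-9]+)s[0-9]+$|/dev/rdisk\1|'   # prints /dev/rdisk2
```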

sudo dd bs=1m if=2015-05-05-raspbian-wheezy.img of=<device path to write>

This might take a while — in my system, 5 minutes or so. You may type Ctrl+T to send the SIGINFO signal and get a status update. Also, you might need to write bs=1m as bs=1M, depending on the version of dd.

This is it ! Eject the card. Time to move to the Pi.

Configuring Raspbian

Insert the newly formatted SD card into the Pi, and plug in the needed devices. We will need at least : an ethernet connection to the Internet, a keyboard, and a monitor. Plug in the power last.

With some luck, the Raspberry Pi Software Configuration Tool will appear during the first boot. (If it doesn’t appear, or if you are not using a fresh system, you can launch it by typing sudo raspi-config in the command shell.)

Be sure to at least activate option 1 (“Expand Filesystem”) so Raspbian will use the entire SD card. Otherwise the upgrade commands below risk running out of space. I will also use option 2 (“Change User Password”) to make access to the system secure ; option 3 (“Enable Boot to Desktop/Scratch“) to ensure that the system boots to Console Text (no need for graphical desktop in this small web server) ; option 4 (“Internationalization Options“) to configure my keyboard to US International; and option 8 (“Advanced Options“) to set the hostname, and to ensure that SSH access is enabled.

Updating and upgrading the system

Once the system reboots, log in (the default user is pi, and the default password, if you didn’t change it with option 2 above, is raspberry).

We’ll start by bringing the system up to speed :

sudo apt-get -y update
sudo apt-get -y dist-upgrade

Installing Lighttpd, SQLite, and PHP

Time to install the components of the web server. The instructions are adapted from smching’s Instructable.

sudo apt-get -y install lighttpd
sudo apt-get -y install sqlite3
sudo apt-get -y install php5 php5-common php5-cgi php5-sqlite 

Now for a bit of configuration. We’ll want to enable fastcgi to handle the PHP pages :

sudo lighty-enable-mod fastcgi
sudo lighty-enable-mod fastcgi-php
sudo service lighttpd force-reload # restart the Lighttpd service

Setting permissions

Finally, we need to set up the permissions of /var/www (the root of the web server), and include the default user in the group that can read/write that directory.

sudo chown -R www-data:www-data /var/www
sudo chmod -R 775 /var/www
sudo usermod -a -G www-data pi
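Before rebooting, you can sanity-check that the commands took effect. A hedged sketch (the grep pipeline just tests group membership ; the fallback message is mine) :

```shell
# The directory should now belong to www-data:www-data...
ls -ld /var/www 2>/dev/null || true
# ...and pi should appear in the www-data group (only after logging out and in) :
id -nG pi 2>/dev/null | tr ' ' '\n' | grep -x www-data \
    || echo 'pi is not in www-data (did you log out and back in ?)'
```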

Time for a reboot !

sudo reboot


If you have a second computer attached to your network, you can use it to access the recently installed web server. Before leaving the Pi, type ifconfig to get its IP address. You should get something like…

$ ifconfig
eth0      Link encap:Ethernet  HWaddr b8:27:eb:7e:b2:3c  
          inet addr:<IP address of the Pi>  Bcast:<broadcast address>  Mask:<network mask>
          RX packets:218 errors:0 dropped:0 overruns:0 frame:0
          TX packets:128 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:47677 (46.5 KiB)  TX bytes:18722 (18.2 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1104 (1.0 KiB)  TX bytes:1104 (1.0 KiB)

…showing, in the inet addr field of eth0, the IP address attributed to the Pi on ethernet.

You can type that IP address (http://<IP address>/) on the browser of another computer to check the web server. You should see the placeholder page for Lighttpd, showing that the web server is correctly installed.

Also — if you haven’t disabled SSH — you can type ssh pi@<IP address> to access the Pi remotely. This is useful if you have a single set of monitor/keyboard/mouse.

If the Pi is your only computer, you can start the graphical desktop with the command startx, and then navigate to the newly installed web server. Typing the IP address of the machine, as before, will work, but so does using the loopback interface : http://127.0.0.1/ or http://localhost/.

Next, let’s test if PHP is working. Paste this simple test page…

<?php phpinfo(); ?>

…into /var/www/test.php.

Then, type http://<IP address>/test.php in your browser (changing <IP address> for the actual IP of your Pi, or using http://localhost/test.php if browsing from inside the Pi). You should see the PHP info page.

Finally, let's run a small test on PHP + SQLite. I adapted this code from Veit Osiander's post on Scandio. Paste the following...

<?php
try {
    // Create file "scandio_test.db" as database
    $db = new PDO('sqlite:scandio_test.db');
    // Throw exceptions on error
    $db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
    // Create a table for the test posts
    $sql = <<<SQL
CREATE TABLE IF NOT EXISTS posts (
    id INTEGER PRIMARY KEY,
    message TEXT,
    created_at INTEGER
)
SQL;
    $db->exec($sql);
    // Some sample messages to insert
    $data = array(
        'Test '.rand(0, 10),
        'Data: '.uniqid(),
        'Date: '.date('d.m.Y H:i:s')
    );
    $sql = <<<SQL
INSERT INTO posts (message, created_at)
VALUES (:message, :created_at)
SQL;
    $stmt = $db->prepare($sql);
    foreach ($data as $message) {
        $createdAt = time();
        $stmt->bindParam(':message', $message, PDO::PARAM_STR);
        $stmt->bindParam(':created_at', $createdAt);
        $stmt->execute();
    }
    // Read the rows back and show them
    $result = $db->query('SELECT * FROM posts');
    foreach ($result as $row) {
        list($id, $message, $createdAt) = $row;
        $output  = "Id: $id<br/>\n";
        $output .= "Message: $message<br/>\n";
        $output .= "Created at: ".date('d.m.Y H:i:s', $createdAt)."<br/>\n";
        echo $output;
    }
    // Clean up after the test
    $db->exec("DROP TABLE posts");
} catch(PDOException $e) {
    echo $e->getMessage();
    echo $e->getTraceAsString();
}

...into /var/www/testdb.php.

Navigate to http://<IP address>/testdb.php (or http://localhost/testdb.php), and if everything goes well, you should get something like :

Id: 1
Message: Test 1
Created at: 30.09.2015 05:34:33
Id: 2
Message: Data: 560b746950a70
Created at: 30.09.2015 05:34:33
Id: 3
Message: Date: 30.09.2015 05:34:33
Created at: 30.09.2015 05:34:33
The Raspberry Pi is perfect to prototype a small appliance with a web server as interface.


If you get an ugly error, like "General error: 14 unable to open database file", double check the permissions of /var/www (and everything inside it). The user www-data must have reading and writing permission on everything (see the section "Setting permissions" above).

Otherwise, you are ready to start working on your project !

From instance launch to model accuracy: an AWS/Theano walkthrough

My team has recently participated in Kaggle’s Diabetic Retinopathy challenge, and we won… experience. It was our first Kaggle challenge and we found ourselves unprepared for the workload.

But it was fun — and it was the opportunity to learn new skills, and to sharpen old ones. As the deadline approached, I used Amazon Web Services a lot, and got more familiar with it. Although we have our GPU infrastructure at RECOD, the extra boost provided by AWS allowed exploring extra possibilities.

But it was on the weekend just before the challenge deadline that AWS proved invaluable. Our in-house cluster went AWOL. What are the chances of a power outage bringing down your servers and a pest-control operation blocking physical access to them on the weekend before a major deadline ? Murphy knows. Well, AWS allowed us to go down fighting, instead of throwing in the towel.

In this post, I’m compiling Markus Beissinger’s how-to and deeplearning.net tutorials into a single hyper-condensed walkthrough to get you as fast as possible from launching an AWS instance to running a simple convolutional deep neural net. If you are anything like me, I know that you are aching to see some code running — but after you scratch that itch, I strongly suggest you go back to those sources and study them at leisure.

I’ll assume that you already know :

  1. How to create an AWS account ;
  2. How to manage AWS users and permissions ;
  3. How to launch an AWS instance.

Those preparations out of the way, let’s get started !

Step 1: Launch an instance at AWS, picking :

  • AMI (Amazon Machine Image) : Ubuntu Server 14.04 LTS (HVM), SSD Volume Type – 64-bit
  • Instance type : GPU instances / g2.2xlarge

For the other settings, you can use the defaults, but be careful with the security group and access key to not lock yourself out of the instance.

Step 2 : Open a terminal window, and log into your instance. On my Mac, I type :

ssh -i private.pem ubuntu@xxxxxxxx.amazonaws.com

Where private.pem is the private key file of the key pair used when creating the instance, and xxxxxxxx.amazonaws.com is the public DNS of the instance. You might get an angry message from SSH, complaining that your .pem file is too open. If that happens, change its permissions with :

chmod go-rxw private.pem

Step 3 : Install Theano.

Once you’re inside the machine, this is not complicated. Start by bringing the machine up to date :

sudo apt-get update
sudo apt-get -y dist-upgrade

Install Theano’s dependencies :

sudo apt-get install -y gcc g++ gfortran build-essential git wget linux-image-generic libopenblas-dev python-dev python-pip python-nose python-numpy python-scipy

Get the package for CUDA and install it :

wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1404/x86_64/cuda-repo-ubuntu1404_7.0-28_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1404_7.0-28_amd64.deb
sudo apt-get update
sudo apt-get install -y cuda

This last command is the only one that takes some time — you might want to go brew a cuppa while you wait. Once it is over, put CUDA on the path and reboot the machine :

echo 'export PATH=/usr/local/cuda/bin:$PATH' >> .bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH' >> .bashrc
sudo reboot

Log into the instance again and query for the GPU :

nvidia-smi -q

This should spit a lengthy list of details about the installed GPU.

Now you just have to install Theano. The one-liner below installs the latest version, and after the wait for the CUDA driver, runs anticlimactically fast :

sudo pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git

And that’s it ! You have Theano on your system.

Step 4 : Run an example.
Let’s take Theano for a run. The simplest sample from deeplearning.net that’s already interesting is the convolutional/MNIST digits tutorial. The sample depends on code written in the previous tutorials, MLP and Logistic Regression, so you have to download those too. You also have to download the data. The commands below do all that:

mkdir theano
mkdir theano/data
mkdir theano/lenet
cd theano/data
wget http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz
cd ../lenet
wget http://deeplearning.net/tutorial/code/logistic_sgd.py
wget http://deeplearning.net/tutorial/code/mlp.py
wget http://deeplearning.net/tutorial/code/convolutional_mlp.py

Finally, hit the magical command :

python convolutional_mlp.py

What ? All this work and the epochs will go by as fast as molasses on a winter’s day. What gives ?

You have to tell Theano to run on the GPU, otherwise it will crawl on the CPU. You can paste the lines below into your ~/.theanorc file :

[global]
floatX=float32
device=gpu

[lib]
cnmem=0.9

[nvcc]
fastmath=True

…or you can use the one-liner below to create it:

echo -e '[global]\nfloatX=float32\ndevice=gpu\n\n[lib]\ncnmem=0.9\n\n[nvcc]\nfastmath=True' > ~/.theanorc

Try running the example again.

python convolutional_mlp.py

With some luck, you’ll note two differences: first, Theano will announce the use of the GPU…

Using gpu device 0: GRID K520 (CNMeM is enabled)

…and second, the epochs will run much, much faster !

(Image credit : networkworld.com).

Printing Multiple Copies of a Single Page on a Sheet in OS X

This is something that was driving me crazy : getting multiple copies of a page onto a single sheet in OS X — think of small fliers or business cards. The problem was particularly annoying in Adobe Creative Suite (Illustrator, Photoshop, InDesign), where I hoped (in vain) to find an option to do it easily. The straightforward solution (asking the system Print dialog for multiple pages per sheet, and then asking for multiple copies) doesn’t really work.

The answer is as simple as Columbus’ egg : convert the document to PDF, duplicate the pages manually, and then ask for multiple pages per sheet. It works like a charm !

Need more guidance ? You’re in luck, for I did a video tutorial (my very first — be forgiving) to show the process step by step :

I’m demonstrating the solution on Illustrator CS5, but it works for any page that can be rendered on a PDF, so not only for Illustrator or Photoshop, but also for Microsoft Word and PowerPoint, or Apple Pages and Keynote.

Edit 8/dec : there’s a simpler procedure than the one explained above. Once you open the PDF in Preview, don’t duplicate the pages, and choose the number of Copies per page on the Preview tab (not the Layout tab) on the Print system dialog. I would have completely overlooked this if it weren’t for a helpful YouTube commenter, who also suggests that you can avoid the intermediate PDF step in Microsoft Word by, for example, putting  “1, 1, 1, 1” in Page Range (Copies & Pages tab), and then selecting 4 Pages per Sheet (Layout tab).

Macro Programming, Unit Testing

Macro programming is hell. Abuse the preprocessor and soon you’ll be joining the ranks of the fallen in the Turing tarpit.

The problem is compounded by the infamously unhelpful error messages of GCC. Today, I’ve spent a good part of an hour trying to decipher and debug this particularly mystifying one :

In file included from test_bitvector.c:14:0:
checkhelper.h:79:34: error: '#' is not followed by a macro parameter
 #define checkAssertXXInt(X, OP, Y) do { \

Check the source and see if you can find the problem. I’ll make your life really easy and promise you that the error is in this small snippet :

#define checkAssertXXInt(X, OP, Y) do { \
  unsigned __int128 _ck_x = (unsigned __int128) (X); \
  uintmax_t _ck_x_hi = (_ck_x >> 64); \
  uintmax_t _ck_x_lo = (_ck_x & 0xFFFFFFFFFFFFFFFF); \
  unsigned __int128 _ck_y = (unsigned __int128) (Y); \
  uintmax_t _ck_y_hi = (_ck_y >> 64); \
  uintmax_t _ck_y_lo = (_ck_y & 0xFFFFFFFFFFFFFFFF); \
  ck_assert_msg(_ck_x OP _ck_y, \
    "Assertion '%s' failed: '%s'==%jX.%016jX, '%s'==%jX.%016jX"#msg, #X#OP#Y, \
    #X, _ck_x_hi, _ck_x_lo, #Y, _ck_y_hi, _ck_y_lo); \
} while (0)

Found it ? No ? First hint : although GCC is technically right and there is a problem in the line shown, it would be much more helpful to show another (symmetrical) error.

Still nothing ? There is no shame : the macro parameter msg used with the # operator in line 87 was not declared in line 79.

Now imagine ferreting out the little troll, among a flurry of cascading errors, from a 150-line source that includes 5 other headers. Maddening enough for a Pirsig treatise.

* * *

Ironically, the error occurred whilst I prepared a common header for my unit testing modules.

As a scientist who mostly writes hundred-line scripts, this is the very first time I am designing unit tests. I was worried that the learning curve for a testing library would be steep, and so I was immediately sold by the no-frills proposition of Check, a unit test framework for C. It comes complete with a tutorial that — frankly — if you are an academic programmer like me, tells you everything you need to know. (Although I find the diff notation of the examples less friendly than it could be.)

It took me half an hour to understand Check, and then I was joyously writing and running my tests. Of course, just as soon, I was already wondering how it could be perfected :

  1. Have the assertion helper macros (e.g., ck_assert_int_eq) allow the printing of extra info, like ck_assert_msg does ;
  2. Solve a small bug in the helper macros : they show the inspected expressions by embedding them directly in the printfed string. The problem : what happens when the expression has a modulo operator (%) ? Exactly ! Printf takes it as the start of a conversion specification, and catastrophe ensues ;
  3. Have helper macros for the 128-bit integers that GCC offers as an extension to the standard integers of C ;
  4. Solve a slight aesthetic problem, since I much prefer camelCase for the helper functions to snake_case (But I reserve UGLY_UPPER_SNAKE_CASE for preprocessor macros with UGLY_SECONDARY_EFFECTS, which you MUST_REMEMBER, because THEY_BITE !) ;
  5. Include automatically the main() function — which is very much the same for every test. I took the opportunity to make it parse the command-line to set the verbosity of the output.

Below you’ll find the fruits of those amendments. Or if you prefer, here’s a link for the source repo. It is released under the same LGPL 2.1 license as Check. No warranties : if you choose to use it, and your computer turns into an Antikytherean device,  or your code opens a portal to Baator, I am not liable.

/*
 Name        : checkhelper.h -- Helper header for Check unit test library
 Contributor : Eduardo A. do Valle Jr., 2014-01-27
 License     : LGPL 2.1 --- see http://check.sourceforge.net/COPYING.LESSER
*/


#include <assert.h>
#include <check.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* --- Alternative macros for unit testing --- */

#define checkAbort ck_abort

#define checkAssert ck_assert

#define checkAbortInfo ck_abort_msg

#define checkAssertInfo ck_assert_msg

#define checkAssertInt(X, OP, Y) do { \
  intmax_t _ck_x = (X); \
  intmax_t _ck_y = (Y); \
  ck_assert_msg(_ck_x OP _ck_y, "Assertion '%s' failed: '%s'==%jd, '%s'==%jd", #X#OP#Y, #X, _ck_x, #Y, _ck_y); \
} while (0)

#define checkAssertUInt(X, OP, Y) do { \
  uintmax_t _ck_x = (X); \
  uintmax_t _ck_y = (Y); \
  ck_assert_msg(_ck_x OP _ck_y, "Assertion '%s' failed: '%s'==%ju, '%s'==%ju", #X#OP#Y, #X, _ck_x, #Y, _ck_y); \
} while (0)

#define checkAssertXInt(X, OP, Y) do { \
  uintmax_t _ck_x = (X); \
  uintmax_t _ck_y = (Y); \
  ck_assert_msg(_ck_x OP _ck_y, "Assertion '%s' failed: '%s'==%jX, '%s'==%jX", #X#OP#Y, #X, _ck_x, #Y, _ck_y); \
} while (0)

#define checkAssertString(X, OP, Y) do { \
  const char* _ck_x = (X); \
  const char* _ck_y = (Y); \
  ck_assert_msg(0 OP strcmp(_ck_y, _ck_x), "Assertion '%s' failed: '%s'==\"%s\", '%s'==\"%s\"", #X#OP#Y, #X, _ck_x, #Y, _ck_y); \
} while (0)

#define checkAssertIntInfo(X, OP, Y, msg, ...) do { \
  intmax_t _ck_x = (X); \
  intmax_t _ck_y = (Y); \
  ck_assert_msg(_ck_x OP _ck_y, "Assertion '%s' failed: '%s'==%jd, '%s'==%jd. "#msg, #X#OP#Y, #X, _ck_x, #Y, _ck_y, ## __VA_ARGS__); \
} while (0)

#define checkAssertUIntInfo(X, OP, Y, msg, ...) do { \
  uintmax_t _ck_x = (X); \
  uintmax_t _ck_y = (Y); \
  ck_assert_msg(_ck_x OP _ck_y, "Assertion '%s' failed: '%s'==%ju, '%s'==%ju. "#msg, #X#OP#Y, #X, _ck_x, #Y, _ck_y, ## __VA_ARGS__); \
} while (0)

#define checkAssertXIntInfo(X, OP, Y, msg, ...) do { \
  uintmax_t _ck_x = (X); \
  uintmax_t _ck_y = (Y); \
  ck_assert_msg(_ck_x OP _ck_y, "Assertion '%s' failed: '%s'==%jX, '%s'==%jX. "#msg, #X#OP#Y, #X, _ck_x, #Y, _ck_y, ## __VA_ARGS__); \
} while (0)

#define checkAssertStringInfo(X, OP, Y, msg, ...) do { \
  const char* _ck_x = (X); \
  const char* _ck_y = (Y); \
  ck_assert_msg(0 OP strcmp(_ck_y, _ck_x), \
    "Assertion '%s' failed: '%s'==\"%s\", '%s'==\"%s\". "#msg, #X#OP#Y, #X, _ck_x, #Y, _ck_y, ## __VA_ARGS__); \
} while (0)

#ifdef __SIZEOF_INT128__

    #define checkAssertXXInt(X, OP, Y) do { \
      unsigned __int128 _ck_x = (unsigned __int128) (X); \
      uintmax_t _ck_x_hi = (_ck_x >> 64); \
      uintmax_t _ck_x_lo = (_ck_x & 0xFFFFFFFFFFFFFFFF); \
      unsigned __int128 _ck_y = (unsigned __int128) (Y); \
      uintmax_t _ck_y_hi = (_ck_y >> 64); \
      uintmax_t _ck_y_lo = (_ck_y & 0xFFFFFFFFFFFFFFFF); \
      ck_assert_msg(_ck_x OP _ck_y, "Assertion '%s' failed: '%s'==%jX.%016jX, '%s'==%jX.%016jX", #X#OP#Y, #X, _ck_x_hi, _ck_x_lo, #Y, _ck_y_hi, _ck_y_lo); \
    } while (0)

    #define checkAssertXXIntInfo(X, OP, Y, msg, ...) do { \
      unsigned __int128 _ck_x = (unsigned __int128) (X); \
      uintmax_t _ck_x_hi = (_ck_x >> 64); \
      uintmax_t _ck_x_lo = (_ck_x & 0xFFFFFFFFFFFFFFFF); \
      unsigned __int128 _ck_y = (unsigned __int128) (Y); \
      uintmax_t _ck_y_hi = (_ck_y >> 64); \
      uintmax_t _ck_y_lo = (_ck_y & 0xFFFFFFFFFFFFFFFF); \
      ck_assert_msg(_ck_x OP _ck_y, "Assertion '%s' failed: '%s'==%jX.%016jX, '%s'==%jX.%016jX. "#msg, #X#OP#Y, #X, _ck_x_hi, _ck_x_lo, #Y, _ck_y_hi, _ck_y_lo, ## __VA_ARGS__); \
    } while (0)

#endif

/* --- Unless CHECK_HELPER_TEST_SUITE_NAME is defined with the name to use, testSuite() will be called to build the suite to run --- */

#ifndef CHECK_HELPER_TEST_SUITE_NAME
    #define CHECK_HELPER_TEST_SUITE_NAME testSuite
#endif

/* --- Unless CHECK_HELPER_MAIN_NAME is defined with the name to use, main() will be used --- */

#ifndef CHECK_HELPER_MAIN_NAME
    #define CHECK_HELPER_MAIN_NAME main
#endif

/* --- Provide a main() function by default, unless CHECK_HELPER_NO_MAIN is defined --- */

#ifndef CHECK_HELPER_NO_MAIN

    Suite * CHECK_HELPER_TEST_SUITE_NAME(void);  // declaration

    int CHECK_HELPER_MAIN_NAME(int argc, char **argv) {

        enum print_output mode = CK_NORMAL;

        if      (argc>1 && (strcmp(argv[1], "-v")==0 || strcmp(argv[1], "--verbose")==0)) {
            mode = CK_VERBOSE;
        }
        else if (argc>1 && (strcmp(argv[1], "-q")==0 || strcmp(argv[1], "--quiet")==0)) {
            mode = CK_SILENT;
        }
        else if (argc>1 && (strcmp(argv[1], "-m")==0 || strcmp(argv[1], "--minimal")==0)) {
            mode = CK_MINIMAL;
        }
        else if (argc>1 && (strcmp(argv[1], "-n")==0 || strcmp(argv[1], "--normal")==0)) {
            mode = CK_NORMAL;
        }
        else if (argc>1 && (strcmp(argv[1], "-e")==0 || strcmp(argv[1], "--environment")==0)) {
            mode = CK_ENV;
        }
        else if (argc>1) {
            #ifndef CHECK_HELPER_USAGE_STRING
            printf("usage: [test_executable] [verbosity flags: -(-q)uiet | -(-m)inimal | -(-n)ormal (default) | -(-v)erbose | -(-e)nvironment ]\n"
                   "       if -e or --environment get verbosity from environment variable CK_VERBOSITY (values: silent minimal normal verbose)\n");
            #else
            /* if the user defined a custom usage string, print it instead */
            printf("%s\n", CHECK_HELPER_USAGE_STRING);
            #endif
            return 1;
        }

        Suite *suite = CHECK_HELPER_TEST_SUITE_NAME();

        SRunner *suiteRunner = srunner_create(suite);
        srunner_run_all(suiteRunner, mode);
        int numberFailed = srunner_ntests_failed(suiteRunner);
        srunner_free(suiteRunner);

        return (numberFailed == 0) ? EXIT_SUCCESS : EXIT_FAILURE;
    }

#endif

Wow ! Much Homebrew. Very Numpy. So Scipy. Such OpenCV

The first time I tried to install NumPy+SciPy in my Mac, it turned into a Kafkaesque nightmare, out of which I only managed to surface due to luck and grit. (Only to have, a few weeks later, a system update breaking my MacPorts and sending everything back to hell.)

The second time around, I traded freedom for comfort, and went with Enthought Python Distribution (now Enthought Canopy).  EPD came with an impressive list of available packages, and, more importantly : it just worked. It was also generously available at no fee for academic use, an offer from which I’ve profited.

Recently though, I became a latecomer to Homebrew, enticed by their taglines (‘The missing package manager for OS X’, ‘MacPorts driving you to drink ? Try Homebrew !’) and by their one-liner installation procedure (look for ‘Install Homebrew’ at their homepage).

So far, I am incredibly impressed — I’ve done fresh installations of Python, Nose, NumPy, OpenCV, GCC (!), SciPy, Bottleneck, wxPython and PIL. All went smoothly, installing and testing without smoke. My command-line history reveals just how easy it was :

ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"

brew install python

/usr/local/bin/pip-2.7 install nose

/usr/local/bin/pip-2.7 install numpy

brew install opencv

brew install gcc49

brew install scipy

/usr/local/bin/pip-2.7 install bottleneck

brew install wxwidgets

/usr/local/bin/pip install pil

The only pitfall (if it may even be called so) is that some Python packages prefer the Homebrew installer, and some prefer pip — but quick trial and error works just fine to find out.

Often the Homebrew installer will discreetly guide you through the process, like when I asked for ‘brew install wxpython’, and it told me that there was no such package, but that ‘wxwidgets’ already came with the wxPython bindings. That kind of gentle bending of the Unix philosophy, on behalf of preserving the user’s sanity, never fails to win my respect.

Now : maybe Homebrew is running so smoothly only because I have EPD already installed on this machine, all esoteric dependencies having been previously solved. I also had Xcode fully installed and operational, a requirement for getting most interesting tools to work on OS X at all. Remark also that I am still running Mountain Lion.

Homebrew’s express requirements seem to be quite modest, however : the Command-line Tools for Xcode, and a bash- or zsh-compatible shell (the default terminal is fine). Additionally, it resides in a branch independent from EPD, so it probably can’t count on the latter’s dependencies. In a few weeks, I intend to do a fresh installation of Mavericks on this machine, and we will know for sure.

IT Industry and Users — a lesson in advanced BDSM

Case study : you have an hour-long educational video that you want to upload to your YouTube channel. The video needs minor editing : (1) trimming and eliminating a few portions ; (2) splicing in another short video ; (3) maybe normalizing the sound.

On a 2011 MacBook Pro using OS X, your options are :

Use iMovie (bundled with OS X) — it will take 30′ importing the clips, then you’ll have editing capabilities much more advanced than you need. But it will take 10 hours for it to reencode the final video, and of course, there will be a compounded quality loss due to the double reencoding ;

Use Quicktime X Player (bundled with OS X) — contrary to what “player” would suggest, Quicktime X gives you recording abilities, and very minor editing abilities, like trimming and gluing takes, without having to reencode the video. Sounds perfect, no ? The trimming feature is, however, practically useless, since it has very poor resolution (what is the use of editing if you’ll end up having people cut off in the middle of an utterance ?). And you cannot remove inner portions of a video, just trim the edges : although you can go around this by trimming and gluing several times, for very large videos, it quickly becomes annoying.

Use Quicktime 7 Pro (US$ 30 on Apple Store US) — the old version of QuickTime has much finer trimming abilities, and it will allow you to remove inner portions of videos. Free trumps bought any day, but the price is not bad, considering the time it could save. (What really ruffles my feathers is that Apple is basically selling you abandonware at 30 bucks a piece, but Apple’s greediness shouldn’t really surprise anyone by now.) After you go through all the hoops to find out how to install QT7 on OS X Mountain Lion or Mavericks (it does not work if you don’t get exactly the right version), you find the deal breaker : it is not available in Apple Store Brazil, and Apple Store US does not accept foreign credit cards.

(This is getting ridiculous.)

Use Avidemux (Free as in speech, and as in beer) — Jackpot ! This open-source project is meant to solve exactly your problem : do minor edits on videos without having to reencode the whole thing (although it has advanced reencoding abilities as well). Installing it is a bit of a hassle : you do all the clicks and have no sound on output. Some googling tells you that the problem is a preferences setting, but you can’t find the “Preferences” panel because a freaky bug has blanked out most GUI elements, including most of the menu items. One hour later, you finally have sound (having found out which blank menu item is “Preferences”, which blank tab is “Sound options”, and which blank field is “Sound device”). All that to find out that the preview window has a time delay on the play/pause/ff/rev buttons that makes it completely useless for precise editing purposes.

(If it weren’t four in the morning I would scream.)

Use ffmpeg (Free as in speech, and as in beer) — Video editing on the command-line : you may be an Apple user by choice, but even for you there is such a thing as too much masochism.
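(For completeness — and for readers less allergic to the terminal than I was that night — a lossless cut with ffmpeg looks roughly like this. The file names and timestamps are placeholders; `-c copy` copies the streams without reencoding, which is fast but, like Quicktime X, can only cut near keyframes.)

```shell
# Keep the segment starting at 1m30s, lasting 3m30s,
# copying audio/video streams as-is (no reencoding).
ffmpeg -ss 00:01:30 -i input.mov -t 00:03:30 -c copy output.mov
```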

Use Final Cut Pro (US$ 300 in Apple Store US), Adobe Premiere CC (US$ 30 per month for Teachers and Students in US) — In order to cut and paste video pieces. Are you kidding ?

(Okay, you see where this is going — I’ve tried and discarded many other solutions. For the sake of curiosity, my browser history shows : “H.264 Cutter”, “Why am I doing this to myself”, “MPEG Streamclip”, “Mp4Split”, “Can a MacBook Survive a 4 story fall”, “Xilisoft Video cutter”, etc.)

And I didn’t find a solution ! In the end, I sucked it up and went with iMovie. Grudgingly.

Computer Users, we have to do something about this play : this is neither safe, nor sane. And Computer Industry, if you are listening, this is definitely not consensual.

Update 13/dec : A typo in the link for my YouTube channel was misdirecting people to the fake tiniurl URL shortener, instead of the real tinyurl. My bad.

Update 18/dec : The story is, unfortunately, not over. Now iMovie refuses to generate an output with a mystifying “error 49” that, according to a Google Search, might mean everything and the kitchen sink. I’ve reverted to a rough edit with Quicktime X and got an output… without audio. I am arriving at the conclusion that I should give up IT entirely and move to the country to raise pigs. This whole “computing” idea should just be filed as “tried, did not work”.

Update 19/dec : apparently, iMovie and Time Machine don’t play nice with each other — which is one of the possible causes of the error 49. Unless you are rendering very short videos, it is better to disable Time Machine (click on the Time Machine icon on the menu bar — upper right corner — then “Open Time Machine Preferences”, then switch the big “On/Off” button. Remember to turn it on again once the finalization or export is over !). This worked for me, but there is no guarantee it will make the pestering error disappear.
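(If you prefer the terminal, OS X ships a `tmutil` utility that toggles the same setting — it needs admin rights, hence the `sudo`.)

```shell
# Pause automatic Time Machine backups before rendering...
sudo tmutil disable
# ...render/export in iMovie...
# ...then re-enable backups when done:
sudo tmutil enable
```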

Oh Linux, you’ll never give me a boring weekend, will you ?

For the 100th time in my life I am installing Ubuntu on a machine — on my MacBook Pro this time.

Since it’s the third year of the second decade of the third millennium, I was expecting a dull “plug and play” procedure. But it came as a nice surprise that Linux is still a wonderful, unpredictable adventure. You never know whether installing your wireless card will take seconds or hours; whether or not upgrading your graphics card will result in a bricked system; or whether or not you’ll end the day throwing your computer out a window in frustration.

Just kidding — you know for sure that installing the wireless card will take weeks, and that you’ll end up throwing yourself out a window.

Sharing Mathematica with Yourself

Maybe I forgot to click on some half-hidden checkbox when I installed it, but I found out that my Mathematica 9 copy was working for only one of the users on my MacBook. That is annoying, because I keep separate users for my everyday usage and for giving presentations and classes (so there is zero risk that one of my friends suddenly appears on Skype telling a dirty joke in the middle of a presentation to the school’s president).

But I found out the problem is easy to remedy. There are three folders where Mathematica 9 searches for the license files : $BaseDirectory/Licensing, $InstallationDirectory/Configuration/Licensing, and $UserBaseDirectory/Licensing (open a new Mathematica notebook and type the commands $BaseDirectory, etc. to find out exactly what the paths are in your system). Sure enough, mine was in $UserBaseDirectory/Licensing — meaning it was accessible by just that user.

This simple sequence of commands solved the problem :

$ sudo su
# mkdir -p $BaseDirectory
# mv $UserBaseDirectory/Licensing $BaseDirectory

Again, be sure to substitute the $variables above with the correct paths. I double-checked the permissions, and mine were already OK (all users had read permissions). If that’s not the case for you, try this command :

$ chmod -R a+rX $BaseDirectory

And that is all. (I hope that doesn’t violate any terms, but I can’t see why it would : this kind of Mathematica license is per machine, and even if it were per user, well, both users are the same person, and they are never both “on” at once, right ?)

I don’t know if this Mathematica single-machine/multi-user license problem (or solution) applies for systems other than Mac OS X — if you find out, I’d be glad to know.

Mac OS X, Word and the quest for the unbloated PDF

Hard-science scholars are strange people who insist on using TeX because it “typesets beautifully”, but then forget to check badness warnings, letting the lines spill beyond the right margin. I resisted TeX as much as I could, until I finally caved to peer pressure. Still, I only use it for cooperative work : when all by myself, I want something, let’s say frankly, less Jurassic.

Still, I am forced to envy my frozen-in-1975 colleagues when I find out that saving to PDF, an operation that the industry should have gotten right by now, turns my 1 MB Microsoft Word file into an 80 MB PDF-zilla.

I spent a good part of my morning solving that problem, considering both the official solution and more independent initiatives. The official solution flunked when I found out that Adobe had no trial of Acrobat for Mac (am I really willing to spend US$ 500 just to find out whether or not it’ll do what I want ?). I tried a PDF compression solution, PDF Shrink, which reduced my PDF… from 89 MB to 87 MB, while horribly mangling all the images : not exactly worth the US$ 35. I also tried recreating the PDF from scratch, but PDF Studio, at US$ 125, just refused to open the Word file with a cryptic ‘error reading’ message. I was glad both offered trials.

In despair, I continued searching the Web. Lots of users crying “Large PDF !”, “Word PDF too big !”, “Huge PDFs on Mac !”, but very few answers. Industry, why u no listen ?

They say that we should never attribute to malice that which can be explained by incompetence. But Hanlon’s razor notwithstanding,  I couldn’t avoid drifting into conspiracy theories. What if that horrible implementation of PDF conversion was not completely accidental ?

Conspiracy theories are unfalsifiable, of course, but I’ll tell you what finally solved the problem and you’ll tell me if it doesn’t make you itsy bitsy suspicious :

  1. On Word, instead of saving to PDF, save to PostScript (using File… Print…, and then, on the print dialog, the PDF button on the lower left corner. Save as PostScript is one of the options);
  2. Open the PostScript file (double-click its icon) and let Preview make the automatic conversion;
  3. On Preview, save the file as PDF (using the menu File… Export… — or on older OS X versions File… Save as…)
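(Steps 2 and 3 can also be done from the terminal : OS X ships a `pstopdf` utility that uses the same Quartz conversion Preview does under the hood. File names below are placeholders.)

```shell
# Convert the PostScript file produced in step 1 to PDF
# via Quartz — the same engine Preview uses.
pstopdf paper.ps -o paper.pdf
```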

And that’s all. Now let’s check the sizes :

  • Original PDF file (using Save as PDF or Print to PDF from Word) : 89 MB
  • PostScript file (using Print to PostScript) : 94 MB
  • Final PDF file (using the steps above) : 5 MB

That is, using only tools already present in OS X, and three small steps, I got an almost 18x smaller file. At the risk of joining the ranks of the ‘moon hoax’ lunatics, I smell something rotten in the current state of Word’s PDF conversion on OS X.

* * *

Incidentally, I found something I also needed : how to password-protect PDFs. I was ready to buy a solution, but I found that unnecessary.

When creating a new one, on OS X, you can click on the “Security Options” menu of the “Export as” (“Save as” in older versions) dialog — how come I never noticed that one ?

If the PDF exists already, you can open it with Preview, go to File… Export… (File… Save as… in older OS X versions), and check the box “Encrypt”. Two textboxes below let you enter a password. Save the file and it will only be visible after the password is entered.
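(If you’d rather script it, the free, open-source qpdf tool can add a password to an existing PDF from the command line. File names and passwords below are placeholders; the first password opens the file, the second controls permission changes.)

```shell
# Encrypt with 256-bit AES: user password "s3cret",
# owner password "0wner", reading input.pdf, writing protected.pdf.
qpdf --encrypt s3cret 0wner 256 -- input.pdf protected.pdf
```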

EDIT 28/02/12 : I am finding out that the above method is by no means foolproof, i.e., it doesn’t work for every kind of PDF. In particular, I tested it on PDFs generated by PowerPoint, and it backfired (the PostScript conversion generated a much bigger file). For image-heavy PDFs from PowerPoint, contrary to the mainly textual ones from Word, I’m finding that the usual tip of using a Quartz Filter (open the PDF file with Preview, then File… Export… [File… Save As… in older OS X versions], then select “Reduce File Size” on the Quartz Filter field in the dialog) works quite well.

EDIT 23/07/13 : I never dreamed this blog entry would become my most popular one. (Apple and Microsoft, aren’t you listening ?) I’ve edited the procedure above to reflect the change in the Save As… logic introduced in OS X Lion, when it became Export…

Buy me ! Upgrade me ! Register me !

It had happened just once before, but now that Parallels (the virtual environment of choice for Mac users) has launched its version 7, trying to use it is a constant source of irritation. Every other time I open the application, I am greeted with a huge colorful popup ad prompting me to upgrade. The “do not show me this again” checkbox is basically useless. I asked it politely to refrain from soliciting, to no avail.

I remember when shareware was a novel concept, and I used to download it by the dozens : nagging screens were part of the deal, something to be expected until you paid for a registered version. Since when have those “features” become cricket for bought wares ?

After wondering if I should open a support ticket, I just reported the pestering behavior as a bug. Let’s see if the development team will agree.