Archive for the ‘server’ Category

hadoop on ARM (part 2)

Posted: July 10, 2012 in linaro, server

If you’ve already read part 1, be sure to go back and have a relook, as I had an error in some of the xml conf files.

Be aware this is a work in progress. What I have here works, but the steps to setup the cluster could use some polish and optimization for ease of use.

While hadoop running on one node is slightly interesting, hadoop running across several ARM nodes, now that’s more like it. It’s time to add additional nodes. In my case I’m going to have a 4 node hadoop cluster made up of 3 TI panda boards and 1 freescale imx53. Let’s walk through the steps to get that up and running. At the end of this exercise, there’s a great opportunity to have a hadoop-server image which is mostly set up.

Network Topology

In hadoop you must designate one machine as the master; the rest will be slaves. So across the collection of machines, let’s first get the hostnames all set up and organized. You’ll also want to give all the machines static ip addresses.

One way to set up a static ip address is to edit /etc/network/interfaces. All my machines are on a 192.168.1.x network; you’ll have to make adjustments as appropriate for your setup. Note in this example the line involving dhcp is commented out, and the address lines use example values:

auto lo
iface lo inet loopback

auto eth0
#iface eth0 inet dhcp
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1

Next we’ll update the /etc/hosts file with entries for master and all the slaves, across all the machines. Edit /etc/hosts on each machine with its ip address to name mappings. Here’s an example from the system that is named master. Note the respective system name appears on the first line, and the 192.168.1.x addresses are examples to replace with your own:

127.0.0.1       localhost master
192.168.1.10    master
192.168.1.11    slave1
192.168.1.12    slave2
192.168.1.13    slave3
::1             localhost ip6-localhost ip6-loopback
fe00::0         ip6-localnet
ff00::0         ip6-mcastprefix
ff02::1         ip6-allnodes
ff02::2         ip6-allrouters

Next it’s time to update the /etc/hostname file on each system with its name. On my system named master, for instance, the file contains just:

master
With that complete, go back to my previous post and repeat those steps for each node. However, there are a couple of exceptions:

  1. If you start the single-node cluster to test a node, make sure you shut it down again afterwards.
  2. Reboot each node after you’ve completed the steps.

All done? Good! Now we’re ready to connect the nodes so that work will be dispatched across the cluster. First we need to get hduser’s public ssh key onto all the slaves. As the hduser on your master node, issue the following for each slave node:

master$ ssh-copy-id -i $HOME/.ssh/ hduser@slave1

Afterwards test that you can ssh from the master to each of the slave nodes. It’s extremely important this works.

master $ slogin slave1
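With three slaves this gets repetitive. Here’s a sketch that loops over all of them, using ssh-keygen’s default public key path (id_rsa.pub). The commands are echoed rather than executed so you can review them first; drop the echos to run for real:

```shell
# Sketch: push hduser's public key to each slave, then test the login.
# echo makes this a dry run; remove the echos to actually execute.
for host in slave1 slave2 slave3; do
    echo ssh-copy-id -i "$HOME/.ssh/id_rsa.pub" "hduser@$host"
    echo ssh "hduser@$host" hostname
done
```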

Multinode hadoop configuration

Now it’s time to configure the various services that will run. Just like in the first blog post, you can have your master node run as both a slave node and a master node, or you can have it run just as a master node. Up to you!

The master node’s job is to run the NameNode and JobTracker. The slave nodes’ job is to run the DataNode and TaskTracker. We’ll now configure this.

On the master machine, as root, edit /usr/local/hadoop/conf/masters and change localhost to the name of your master machine:

master
Now on the master machine we are going to tell it which nodes are its slaves. This is done by listing the names of the machines in /usr/local/hadoop/conf/slaves. If you want the master machine to also serve as a slave, then you need to list it. If you don’t want the master to be a slave, then remove the entry for localhost and make sure the name of the master isn’t listed. You have to update this file on all machines in the cluster.
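As a concrete sketch, here’s one way to write both files from the shell. The hostnames match this article, the master doubles as a slave (the choice described above), and a scratch directory stands in for /usr/local/hadoop/conf so nothing real gets overwritten:

```shell
# Sketch: generate conf/masters and conf/slaves for the 4-node cluster
# described here, with the master also acting as a slave.
CONF_DIR=$(mktemp -d)          # stand-in for /usr/local/hadoop/conf

echo master > "$CONF_DIR/masters"

# Every machine listed here will run a DataNode and TaskTracker.
# Remove "master" from the list if the master should not be a slave.
printf '%s\n' master slave1 slave2 slave3 > "$CONF_DIR/slaves"

cat "$CONF_DIR/slaves"
```

Remember both files have to end up on every machine in the cluster.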


Now for all machines in the cluster, we need to update /usr/local/hadoop/conf/core-site.xml. Specifically look for


and change localhost to the name of your master node.
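The snippet itself didn’t survive here; in Hadoop 1.x the property in question is fs.default.name. A sketch of the edit against a sample file in a scratch directory (54310 is just the port commonly used in single-node tutorials; keep whatever yours says):

```shell
# Sketch: flip fs.default.name from localhost to the master's hostname.
# The file below is a minimal example of conf/core-site.xml.
WORK=$(mktemp -d)
cat > "$WORK/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
  </property>
</configuration>
EOF

# On every node, point the default filesystem at the master's NameNode.
sed -i 's|hdfs://localhost:|hdfs://master:|' "$WORK/core-site.xml"
grep '<value>' "$WORK/core-site.xml"
```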


Now we’re going to update /usr/local/hadoop/conf/mapred-site.xml, again on all machines. This file specifies where the JobTracker runs, which is our master node. Look for


and change to
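The snippet didn’t survive here either; in Hadoop 1.x this is the mapred.job.tracker property. A sketch of the edit on a sample file, with 54311 as an example port:

```shell
# Sketch: point mapred.job.tracker at the master instead of localhost.
WORK=$(mktemp -d)
cat > "$WORK/mapred-site.xml" <<'EOF'
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
  </property>
</configuration>
EOF

sed -i 's|<value>localhost:|<value>master:|' "$WORK/mapred-site.xml"
grep '<value>' "$WORK/mapred-site.xml"
```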


Next, on all nodes, edit /usr/local/hadoop/conf/hdfs-site.xml to adjust the value for dfs.replication. This value needs to be equal to or less than the number of slave nodes you have in your cluster. Specifically, it controls how many nodes data has to be copied to before the job starts. The default, if unchanged, is 3. If the number is larger than the number of slave nodes that you have, your jobs will experience errors. Here’s how mine is set up, which is acceptable since I have a total of 4 nodes.


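As a sketch along those lines, here’s an hdfs-site.xml with 4 as an example value (4 nodes, all running DataNode; remember the rule that replication must not exceed the slave count):

```shell
# Sketch: conf/hdfs-site.xml with dfs.replication set to 4 as an
# example; the value must not exceed the number of slave nodes.
SLAVE_COUNT=4
REPLICATION=4
[ "$REPLICATION" -le "$SLAVE_COUNT" ] || { echo "replication too high" >&2; exit 1; }

WORK=$(mktemp -d)
cat > "$WORK/hdfs-site.xml" <<EOF
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>$REPLICATION</value>
  </property>
</configuration>
EOF
cat "$WORK/hdfs-site.xml"
```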
Next, on the master node, su - hduser and issue the following to format the hdfs file system.

master ~$ hadoop namenode -format

Starting the cluster daemons

Now it’s time to start the cluster. Here’s the high level order:

  1. HDFS daemons are started, this starts the NameNode daemon on the master node
  2. DataNode daemons are started on all slaves
  3. JobTracker is started on master
  4. TaskTracker daemons are started on all slaves
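The four steps above can be sketched as a tiny wrapper, assuming the Hadoop 1.x start scripts are on hduser’s PATH. DRY_RUN defaults to on here so the sketch only prints what it would do; unset it on a machine that actually has hadoop installed:

```shell
# Sketch: cluster start-up order, run as hduser on master.
# With DRY_RUN set (the default here) commands are printed, not run.
: "${DRY_RUN:=1}"
run() { if [ -n "$DRY_RUN" ]; then echo "+ $*"; else "$@"; fi; }

run start-dfs.sh      # steps 1 and 2: NameNode on master, DataNodes on slaves
run start-mapred.sh   # steps 3 and 4: JobTracker on master, TaskTrackers on slaves
run jps               # then verify which java daemons came up
```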

Here are the commands in practice. On master as hduser:

hduser@master:~$ start-dfs.sh

Presuming things successfully start you can run jps and see the following:

hduser@master:~$ jps
3658 Jps
3203 DataNode
3536 SecondaryNameNode
2920 NameNode

If you were to also run jps on your slave nodes you’d notice that DataNode is also running there.

Now we will start the MapReduce side of things: the JobTracker on master and the TaskTrackers on the slaves. This is done with:


Presuming you don’t encounter any errors, your cluster should be fully up and running. In my next blog post I’ll do some runs with the cluster and do some performance measurements.

hadoop on ARM (part 1)

Posted: July 8, 2012 in linaro, server

Updated: Slight HTML error on my part caused missing elements in some of the xml conf files.

The next step in my linaro based armhf server image, is to install and run hadoop. This blog post follows the general hadoop instructions found in the apache wiki for ubuntu with updates specifically for a Linaro based install and for ARM.

First we need java.

# apt-get install openjdk-6-jdk

after it’s installed, let’s validate things are fine.

# java -version
java version "1.6.0_24"
OpenJDK Runtime Environment (IcedTea6 1.11.1) (6b24-1.11.1-4ubuntu3)
OpenJDK Zero VM (build 20.0-b12, mixed mode)

Now we need to create a user and group for hadoop

# addgroup hadoop
# adduser --ingroup hadoop hduser

If you haven’t already, make sure you have openssh-server installed

# apt-get install openssh-server

Now we need to gen keys for the hduser account.

# su - hduser
$ ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/
The key fingerprint is:
9b:82:ea:58:b4:e0:35:d7:ff:19:66:a6:ef:ae:0e:d2 hduser@ubuntu
The key's randomart image is:
$ cat ~/.ssh/ >> ~/.ssh/authorized_keys

Yes we’ve created a key with an empty password. This is a test setup. Not a production setup. Now let’s connect locally to make sure everything is ok.

$ slogin localhost

Be sure to connect all the way in to a command line. If that works, you’re in good shape. Exit.

Now on the advice of others, we’re going to disable ipv6.

echo 'net.ipv6.conf.all.disable_ipv6 = 1' >> /etc/sysctl.conf
echo 'net.ipv6.conf.default.disable_ipv6 = 1' >> /etc/sysctl.conf
echo 'net.ipv6.conf.lo.disable_ipv6 = 1' >> /etc/sysctl.conf

And then to make things take effect

# sysctl -p

Now we’re ready to install hadoop. Unfortunately there are no hadoop packages as of yet, so we’ll have to install it from the released tarball. hadoop, as it turns out, is written in java, so it’s just a matter of installation, not a build from source. Download hadoop-1.0.3.tar.gz from here.

# cd /usr/local
# tar xfz hadoop-1.0.3.tar.gz
# ln -s hadoop-1.0.3 hadoop
# mkdir hadoop/logs
# chown -R hduser:hadoop hadoop

Now we need to set up some environment vars for the hduser.

# su - hduser
$ vi ~/.bashrc

and add the following to the end of the file

# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop

# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-armhf

# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"

# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
# Requires installed 'lzop' command.
lzohead () {
    hadoop fs -cat $1 | lzop -dc | head -1000 | less
}

# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
Save and exit from the hduser account.

Now as root again, edit /usr/local/hadoop/conf/ and add

export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-armhf

Now edit /usr/local/hadoop/conf/core-site.xml and add the following between the configuration tags. Feel free to change the temp directory to a different location. This will be where HDFS, the Hadoop Distributed File System, puts its temp files.

<!-- In: conf/core-site.xml -->
  <description>A base for other temporary directories.</description>

  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
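For reference, here’s what the stanza looks like with the property elements restored, as a sketch: hadoop.tmp.dir and fs.default.name are the Hadoop 1.x properties those descriptions belong to, the temp directory matches the one created in the next step, and hdfs://localhost:54310 is the customary single-node example URI, not gospel. Written to a scratch file here:

```shell
# Sketch: conf/core-site.xml for the single-node setup.
# /fs/hadoop/tmp is the temp dir created below; the port is an example.
WORK=$(mktemp -d)
cat > "$WORK/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/fs/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:54310</value>
    <description>The name of the default file system.</description>
  </property>
</configuration>
EOF
grep '<name>' "$WORK/core-site.xml"
```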

Now we need to create the directory

# mkdir -p /fs/hadoop/tmp
# chown hduser:hadoop /fs/hadoop/tmp
# chmod 750 /fs/hadoop/tmp

Now edit /usr/local/hadoop/conf/mapred-site.xml and again drop the following after the configuration tag.

<!-- In: conf/mapred-site.xml -->
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.</description>
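Restored with its property element, as a sketch: the Hadoop 1.x property that description belongs to is mapred.job.tracker, and localhost:54311 is the customary example host:port for a single-node JobTracker. Written to a scratch file here:

```shell
# Sketch: conf/mapred-site.xml; localhost:54311 is an example
# host:port for the JobTracker.
WORK=$(mktemp -d)
cat > "$WORK/mapred-site.xml" <<'EOF'
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:54311</value>
    <description>The host and port that the MapReduce job tracker runs
    at.  If "local", then jobs are run in-process as a single map
    and reduce task.</description>
  </property>
</configuration>
EOF
grep '<value>' "$WORK/mapred-site.xml"
```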

Now edit /usr/local/hadoop/conf/hdfs-site.xml and again add the following after the configuration tag

<!-- In: conf/hdfs-site.xml -->
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.</description>
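Restored with its property element, as a sketch: the property is dfs.replication, set here to 1, which is the sensible value for a single node (the default of 3 would exceed the number of DataNodes we have):

```shell
# Sketch: conf/hdfs-site.xml for a single node; replication of 1
# because there is only one DataNode to copy blocks to.
WORK=$(mktemp -d)
cat > "$WORK/hdfs-site.xml" <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>Default block replication.</description>
  </property>
</configuration>
EOF
grep '<value>' "$WORK/hdfs-site.xml"
```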

Now we are going to setup the HDFS filesystem.

# su - hduser
$ /usr/local/hadoop/bin/hadoop namenode -format

You will see output that should resemble the following

hduser@linaro-server:~$ /usr/local/hadoop/bin/hadoop namenode -format
Warning: $HADOOP_HOME is deprecated.

12/07/09 03:58:09 INFO namenode.NameNode: STARTUP_MSG: 
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = linaro-server/
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.0.3
STARTUP_MSG:   build = -r 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
12/07/09 03:58:12 INFO util.GSet: VM type       = 32-bit
12/07/09 03:58:12 INFO util.GSet: 2% max memory = 19.335 MB
12/07/09 03:58:12 INFO util.GSet: capacity      = 2^22 = 4194304 entries
12/07/09 03:58:12 INFO util.GSet: recommended=4194304, actual=4194304
12/07/09 03:58:18 INFO namenode.FSNamesystem: fsOwner=hduser
12/07/09 03:58:20 INFO namenode.FSNamesystem: supergroup=supergroup
12/07/09 03:58:20 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/07/09 03:58:20 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/07/09 03:58:20 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/07/09 03:58:20 INFO namenode.NameNode: Caching file names occuring more than 10 times 
12/07/09 03:58:21 INFO common.Storage: Image file of size 112 saved in 0 seconds.
12/07/09 03:58:23 INFO common.Storage: Storage directory /fs/hadoop/tmp/dfs/name has been successfully formatted.
12/07/09 03:58:23 INFO namenode.NameNode: SHUTDOWN_MSG: 
SHUTDOWN_MSG: Shutting down NameNode at linaro-server/

Now it’s time to start our single node cluster. Run this as the hduser

$ /usr/local/hadoop/bin/

If all is well you’ll see something like the following:

hduser@linaro-server:~$ /usr/local/hadoop/bin/
Warning: $HADOOP_HOME is deprecated.

starting namenode, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hduser-namenode-linaro-server.out
localhost: starting datanode, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hduser-datanode-linaro-server.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hduser-secondarynamenode-linaro-server.out
starting jobtracker, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hduser-jobtracker-linaro-server.out
localhost: starting tasktracker, logging to /usr/local/hadoop-1.0.3/libexec/../logs/hadoop-hduser-tasktracker-linaro-server.out

Last but not least, the jps command will show you the hadoop processes. Though technically, the jps tool is showing you the java processes on the system.

ARM Profiling and nginx

Posted: July 5, 2012 in linaro, server

In what is becoming a serial, I’ve been working with my linaro-server image and experimenting with nginx to see how much performance one might be able to eke out of a pandaboard pressed into service as a WordPress server.

This morning I thought I would do a little profiling to get an idea where nginx is processor bound.

When driving work with apache bench, in top I see:

top - 10:46:24 up 8 days, 19:08,  3 users,  load average: 0.80, 1.05, 0.63
Tasks:  99 total,   4 running,  95 sleeping,   0 stopped,   0 zombie
Cpu(s):  7.6%us, 21.4%sy,  0.0%ni, 15.4%id,  0.0%wa,  0.0%hi, 55.6%si,  0.0%st
Mem:    974668k total,   825288k used,   149380k free,   142748k buffers
Swap:  1847468k total,       32k used,  1847436k free,   529984k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                            
21165 www-data  20   0 25656 3132  904 R   28  0.3   5:37.05 nginx                               
    3 root      20   0     0    0    0 S   27  0.0   1:47.97 ksoftirqd/0                         
21168 www-data  20   0 25728 3124  908 R   25  0.3   0:31.56 nginx                               
31721 root      20   0     0    0    0 S   24  0.0   0:03.63 kworker/0:1                         
21164 www-data  20   0 25576 3040  904 S   18  0.3   5:39.60 nginx                               
    7 root      RT   0     0    0    0 S   15  0.0   1:19.26 watchdog/0                          
21166 www-data  20   0 25932 3332  904 S   13  0.3   5:43.53 nginx                               
21167 www-data  20   0 25528 2908  904 S    9  0.3   5:35.88 nginx                               
21169 www-data  20   0 25160 2668  920 R    6  0.3   5:48.92 nginx                               
31804 root      20   0  2144 1016  760 R    1  0.1   0:00.94 top                                 
30605 root      20   0     0    0    0 S    1  0.0   0:02.44 kworker/1:1                         
31688 root      20   0  8172 2632 2008 S    1  0.3   0:01.10 sshd

This is probably a good indication that we’re cpu bound in the nginx code. Let’s not, however, jump to conclusions.

I was going to install oprofile (old habits die hard) but it’s been removed from the archive. (bug #653168). RIP oprofile. People have moved on to perf.

I installed perf. I quickly ran into the following. Consider

root@linaro-server:~# perf record -a -g sleep 20 
perf_3.3.1-39 not found
You may need to install linux-tools-3.3.1-39

You can work around the drain bramage by calling the versioned perf command explicitly. /usr/bin/perf is nothing more than a shell script anyway, one that tries to match the version of perf to the kernel you’re running.

To collect system-wide:

root@linaro-server:~# perf_3.2.0-26 record -a -g sleep 20

Which yields

+  17.34%          nginx  [kernel.kallsyms]   [k] __do_softirq
+  11.85%        swapper  [kernel.kallsyms]   [k] omap_default_idle
+   7.93%          nginx  [kernel.kallsyms]   [k] _raw_spin_unlock_irqrestore
+   3.55%          nginx  [kernel.kallsyms]   [k] __aeabi_llsr
+   3.13%        swapper  [kernel.kallsyms]   [k] __do_softirq
+   2.55%    ksoftirqd/0  [kernel.kallsyms]   [k] _raw_spin_unlock_irqrestore
+   1.30%        swapper  [kernel.kallsyms]   [k] _raw_spin_unlock_irqrestore
+   0.95%    ksoftirqd/0  [kernel.kallsyms]   [k] __aeabi_llsr
+   0.91%          nginx  [kernel.kallsyms]   [k] sub_preempt_count.part.61
+   0.71%          nginx  [kernel.kallsyms]   [k] sk_run_filter
+   0.61%          nginx  [kernel.kallsyms]   [k] add_preempt_count
+   0.60%          nginx  [kernel.kallsyms]   [k] tcp_clean_rtx_queue
+   0.60%          nginx  [kernel.kallsyms]   [k] kfree
+   0.57%          nginx  [kernel.kallsyms]   [k] __copy_skb_header
+   0.56%          nginx  [kernel.kallsyms]   [k] __rcu_read_unlock
+   0.53%          nginx  [kernel.kallsyms]   [k] tcp_v4_rcv
+   0.52%          nginx  [kernel.kallsyms]   [k] kmem_cache_alloc
+   0.52%          nginx  [kernel.kallsyms]   [k] _raw_spin_unlock
+   0.49%          nginx  [kernel.kallsyms]   [k] __netif_receive_skb
+   0.44%          nginx  [kernel.kallsyms]   [k] kmem_cache_free
+   0.40%          nginx  [kernel.kallsyms]   [k] skb_release_data
+   0.38%          nginx  [kernel.kallsyms]   [k] skb_release_head_state
+   0.38%          nginx  [kernel.kallsyms]   [k] __inet_lookup_established
+   0.36%          nginx  [kernel.kallsyms]   [k] sub_preempt_count
+   0.36%          nginx  [kernel.kallsyms]   [k] pfifo_fast_dequeue
+   0.36%          nginx  [kernel.kallsyms]   [k] packet_rcv_spkt
+   0.35%        swapper  [kernel.kallsyms]   [k] __aeabi_llsr
+   0.35%          nginx  [kernel.kallsyms]   [k] __copy_from_user
+   0.34%          nginx  [kernel.kallsyms]   [k] __memzero
+   0.32%          nginx  [kernel.kallsyms]   [k] __rcu_read_lock
+   0.30%          nginx  [kernel.kallsyms]   [k] __kfree_skb
+   0.30%          nginx  [kernel.kallsyms]   [k] skb_push
+   0.30%          nginx  [kernel.kallsyms]   [k] tcp_ack
+   0.30%    ksoftirqd/0  [kernel.kallsyms]   [k] tcp_v4_rcv
+   0.29%          nginx  [kernel.kallsyms]   [k] kfree_skbmem
+   0.28%          nginx  [kernel.kallsyms]   [k] ip_rcv
+   0.27%          nginx  [kernel.kallsyms]   [k] sch_direct_xmit
+   0.27%          nginx  [kernel.kallsyms]   [k] ip_route_input_common
+   0.27%    ksoftirqd/0  [kernel.kallsyms]   [k] tcp_clean_rtx_queue
+   0.26%          nginx  [kernel.kallsyms]   [k] enqueue_to_backlog
+   0.26%    ksoftirqd/0  [kernel.kallsyms]   [k] kfree
+   0.24%          nginx  [kernel.kallsyms]   [k] __skb_clone
+   0.24%          nginx  [kernel.kallsyms]   [k] _raw_spin_unlock_irq
+   0.24%    ksoftirqd/0  [kernel.kallsyms]   [k] sk_run_filter
+   0.24%          nginx  [kernel.kallsyms]   [k] dev_queue_xmit
+   0.24%          nginx  [kernel.kallsyms]   [k] __qdisc_run
+   0.24%          nginx  [kernel.kallsyms]   [k] tcp_rcv_state_process
+   0.24%          nginx  [kernel.kallsyms]   [k] tcp_transmit_skb
+   0.23%          nginx  [kernel.kallsyms]   [k] tcp_validate_incoming
+   0.23%          nginx  [kernel.kallsyms]   [k] skb_clone
+   0.23%    ksoftirqd/0  [kernel.kallsyms]   [k] __do_softirq
+   0.22%          nginx  [kernel.kallsyms]   [k] ip_local_deliver_finish
+   0.22%          nginx  [kernel.kallsyms]   [k] __raw_spin_lock_bh
+   0.22%    ksoftirqd/0  [kernel.kallsyms]   [k] __copy_skb_header
+   0.21%          nginx  [kernel.kallsyms]   [k] memcpy

And then collecting from a worker bee

root@linaro-server:~# perf_3.2.0-26 record -a -g -p 21164 sleep 20 
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.178 MB (~7759 samples) ]

Worker bee data:

+  24.15%  nginx  [kernel.kallsyms]   [k] __do_softirq
+  11.56%  nginx  [kernel.kallsyms]   [k] _raw_spin_unlock_irqrestore
+   3.91%  nginx  [kernel.kallsyms]   [k] __aeabi_llsr
+   1.81%  nginx  [kernel.kallsyms]   [k] sub_preempt_count.part.61
+   1.08%  nginx  [kernel.kallsyms]   [k] tcp_v4_rcv
+   1.02%  nginx  [kernel.kallsyms]   [k] __rcu_read_unlock
+   0.96%  nginx  [kernel.kallsyms]   [k] sub_preempt_count
+   0.91%  nginx  [kernel.kallsyms]   [k] __copy_skb_header
+   0.85%  nginx  [kernel.kallsyms]   [k] kmem_cache_alloc
+   0.79%  nginx  [kernel.kallsyms]   [k] __copy_from_user
+   0.79%  nginx  [kernel.kallsyms]   [k] skb_push
+   0.79%  nginx  [kernel.kallsyms]   [k] add_preempt_count
+   0.68%  nginx  [kernel.kallsyms]   [k] kfree
+   0.68%  nginx  [kernel.kallsyms]   [k] memcpy
+   0.68%  nginx  [kernel.kallsyms]   [k] _raw_spin_unlock
+   0.62%  nginx  [kernel.kallsyms]   [k] __rcu_read_lock
+   0.57%  nginx  [kernel.kallsyms]   [k] vector_swi
+   0.57%  nginx  [kernel.kallsyms]   [k] kfree_skbmem
+   0.57%  nginx  [kernel.kallsyms]   [k] dev_hard_start_xmit
+   0.57%  nginx  [kernel.kallsyms]   [k] sk_run_filter
+   0.57%  nginx  [kernel.kallsyms]   [k] __inet_lookup_established
+   0.51%  nginx  [kernel.kallsyms]   [k] __netif_receive_skb
+   0.51%  nginx  [kernel.kallsyms]   [k] dev_queue_xmit
+   0.51%  nginx  [kernel.kallsyms]   [k] ip_rcv
+   0.51%  nginx  [kernel.kallsyms]   [k] tcp_recvmsg
+   0.51%  nginx  [kernel.kallsyms]   [k] packet_rcv_spkt
+   0.45%  nginx  [kernel.kallsyms]   [k] get_parent_ip
+   0.45%  nginx  [kernel.kallsyms]   [k] __kmalloc
+   0.45%  nginx  [kernel.kallsyms]   [k] usbnet_start_xmit
+   0.45%  nginx  [kernel.kallsyms]   [k] ip_local_deliver_finish
+   0.45%  nginx  [kernel.kallsyms]   [k] tcp_rcv_state_process
+   0.45%  nginx  [kernel.kallsyms]   [k] _raw_spin_lock
+   0.40%  nginx        [.] memcpy
+   0.40%  nginx  [kernel.kallsyms]   [k] kmem_cache_free
+   0.40%  nginx  [kernel.kallsyms]   [k] smsc95xx_tx_fixup
+   0.40%  nginx  [kernel.kallsyms]   [k] sk_stream_alloc_skb
+   0.40%  nginx  [kernel.kallsyms]   [k] tcp_clean_rtx_queue
    0.34%  nginx        [.] _int_free
+   0.34%  nginx  [kernel.kallsyms]   [k] local_bh_enable
+   0.34%  nginx  [kernel.kallsyms]   [k] __aeabi_idiv
+   0.34%  nginx  [kernel.kallsyms]   [k] strlen
+   0.34%  nginx  [kernel.kallsyms]   [k] usbnet_bh
+   0.34%  nginx  [kernel.kallsyms]   [k] kfree_skb
+   0.34%  nginx  [kernel.kallsyms]   [k] net_rx_action
+   0.34%  nginx  [kernel.kallsyms]   [k] sch_direct_xmit
+   0.34%  nginx  [kernel.kallsyms]   [k] tcp_sendmsg
+   0.34%  nginx  [kernel.kallsyms]   [k] tcp_ack
+   0.34%  nginx  [kernel.kallsyms]   [k] __raw_spin_lock_irqsave
+   0.28%  nginx  nginx               [.] ngx_radix_tree_create
    0.28%  nginx  nginx               [.] ngx_http_core_merge_loc_conf
    0.28%  nginx  nginx               [.] ngx_http_header_filter
+   0.28%  nginx  nginx               [.] 0x5e5be
+   0.28%  nginx  [kernel.kallsyms]   [k] in_lock_functions
+   0.28%  nginx  [kernel.kallsyms]   [k] aa_revalidate_sk
+   0.28%  nginx  [kernel.kallsyms]   [k] __aeabi_uidiv
+   0.28%  nginx  [kernel.kallsyms]   [k] md5_transform
+   0.28%  nginx  [kernel.kallsyms]   [k] illegal_highdma
+   0.28%  nginx  [kernel.kallsyms]   [k] enqueue_to_backlog

Pretty consistent, both system-wide and within an nginx worker bee process. We’re spending a lot of time in the kernel in __do_softirq. This makes sense, as we’re driving a lot of network activity. There doesn’t appear to be any nginx-related low-hanging fruit. It’s probably time to take a look at the network device driver for the panda board, for performance, for the page allocation failures, and for the connection resets that apache bench reports when the number of concurrent connections is in the 400+ range.


Now at this point you might be thinking the next step in the journey is into the kernel source code. Not quite. Don’t forget that located in /proc/sys/ are a number of kernel values which can be set for various work loads without a recompile of the kernel.

In our case let’s walk through the list.

cd /proc/sys
cat fs/file-max 

file-max is the maximum number of open files on your system. 80,000 might seem like a big number but recall that between the front end and back end, we are using /dev/shm/php-fpm-www.sock so as traffic increases the number of open files through sockets is going to rise dramatically. Here’s how you set the limit for each boot.

echo 'fs.file-max=209708' >> /etc/sysctl.conf

Next is netdev_max_backlog, which is the maximum number of packets queued on the INPUT side.

cat net/core/netdev_max_backlog 

1000 is quite low for all the network traffic we’d like to be able to handle.

echo 'net.core.netdev_max_backlog = 5000' >> /etc/sysctl.conf

Next is net.core.somaxconn which limits the socket listen() backlog. Another value to increase in order to handle lots of incoming connections.

cat net/core/somaxconn 

128! A low number so another value to increase.

echo 'net.core.somaxconn = 4096' >> /etc/sysctl.conf

Last, net.ipv4.tcp_max_syn_backlog sets the limit on half-open (incomplete) connections.

cat net/ipv4/tcp_max_syn_backlog 

For our purposes this is low so this is another value to increase.

echo 'net.ipv4.tcp_max_syn_backlog = 4096' >> /etc/sysctl.conf

At this point you might think you need to reboot your machine for the new values to take effect. No. All you need do is this:

sysctl -p

Doing so with our new values in place, apache bench runs with concurrent requests set in the 500, 600, and 700 ranges no longer fail to complete. Things are stable enough that I can do more scientific measurements, compute confidence intervals, and so on, as you’d expect in order to put some strong meaning behind the numbers.
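One nit with the `>>` appends used above: run them twice and /etc/sysctl.conf ends up with duplicate lines. A small guard fixes that; sketched here against a scratch file rather than the real /etc/sysctl.conf:

```shell
# Sketch: append a sysctl setting only if its key isn't already present.
CONF=$(mktemp)                  # stand-in for /etc/sysctl.conf

add_sysctl() {
    key=${1%%=*}                # everything before the '='
    grep -q "^${key}" "$CONF" || echo "$1" >> "$CONF"
}

add_sysctl 'fs.file-max=209708'
add_sysctl 'net.core.somaxconn = 4096'
add_sysctl 'fs.file-max=209708'   # second call is a no-op
cat "$CONF"
```

Follow with sysctl -p as before to apply.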

Thus far, across the 3 blog posts on tuning an nginx wordpress server on ARM, what have we done that was ARM specific? Not much. These last kernel settings are a concern on any machine with modest resources; they aren’t ARM specific. When tuning a Linux server, you’re typically going to look at these kinds of values.

Are we done? If we want to go hardcore, I think we’re up to the point where profiling the pandaboard ethernet driver would be the next step. No small piece of work, but probably quite interesting.

Thus far being able to handle ~ 70 connections a second isn’t bad but can we do better?


Let’s continue where we left off.

  1. From your WordPress dashboard, select plug-ins. This time we’re going to install the WordPress Nginx proxy cache integrator. As before, search and install.
  2. Now edit /etc/nginx and add
            proxy_cache_path /dev/shm/nginx levels=1:2 keys_zone=czone:16m max_size=32m inactive=10m;
            proxy_temp_path /dev/shm/nginx/tmp;
            proxy_cache_key "$scheme$host$request_uri";
            proxy_cache_valid 200 302 10m;

    directly after the line that has server_names_hash_bucket_size 64;

  3. edit /etc/hosts, modifying the entry for localhost so it reads 127.0.0.1 localhost backend frontend
  4. Now edit /etc/nginx/conf.d/default.conf, and replace everything before the line
    # BEGIN W3TC Page Cache cache

    with:

    server {
        server_name frontend;
        location / {
            proxy_pass http://backend:8080;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_cache czone;
        }
    }
    server {
        server_name backend;
        root /var/www/;
        listen 8080;
        index index.html index.htm index.php;
        include conf.d/drop;
        location / {
            # This is cool because no php is touched for static content
            try_files $uri $uri/ /index.php?q=$uri&$args;
        }
        location ~ \.php$ {
            fastcgi_buffers 8 256k;
            fastcgi_buffer_size 128k;
            fastcgi_intercept_errors on;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/dev/shm/php-fpm-www.sock;
        }
  5. Restart services with
    service php5-fpm restart
    service nginx restart

Now we’re ready for more speed tests with ApacheBench.

That’s quite a speed up. This configuration is able to handle about 200 concurrent dispatches a second before performance starts to drop off. Even at 300 connections per second, the system is still able to handle requests faster than they are coming in, though the latency for each request is starting to build. At 400, the system is only just able to process requests as fast as they are coming in.

Here’s a dstat of a 400 concurrent connections run. There’s an interesting behavior that occasionally shows up here.

root@linaro-server:/etc/nginx/conf.d# dstat
You did not select any stats, using -cdngy by default.
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw 
  1   0  97   1   0   0|3069B   21k|   0     0 |   0     0 | 144    66 
  0   1  99   0   0   0|   0     0 | 150B 1082B|   0     0 | 207    44 
  0   0 100   0   0   0|   0     0 |  52B  366B|   0     0 | 106    56 
  0   0 100   0   0   0|   0     0 | 104B  492B|   0     0 |  96    50 
  0   0 100   0   0   0|   0     0 | 104B  732B|   0     0 | 104    54 
  3   0  88   1   0   8|   0    40k| 101k   43k|   0     0 | 639    76 
 48  33   0   0   0  19|   0     0 | 196k  375k|   0     0 |2578   705 
 47  33   0   0   0  21|   0     0 | 306k  539k|   0     0 |3391   593 
 45  33   0   0   0  22|   0     0 | 404k 4112k|   0     0 |4545   696 
 63  16   0   0   0  22|   0     0 | 458k 7742k|   0     0 |6007  1656 
 96   4   0   0   0   0|   0     0 |2600B   66k|   0     0 | 525   760 
 94   6   0   0   0   0|   0    48k|3894B   98k|   0     0 | 605   886 
 94   6   0   0   0   0|   0    24k|3940B   98k|   0     0 | 591   809 
 93   7   0   0   0   0|   0    16k|3675B   87k|   0     0 | 564   833 
 97   3   0   0   0   0|   0     0 |3062B   76k|   0     0 | 565   829 
 96   3   0   0   0   0|   0     0 |3432B   87k|   0     0 | 547   828 
 97   3   0   0   0   0|   0    24k|3796B   98k|   0     0 | 561   843 
 96   4   0   0   0   0|   0     0 |3016B   76k|   0     0 | 543   855 
 95   5   0   0   0   0|   0     0 |4078B   98k|   0     0 | 574   809 
 97   3   0   0   0   1|   0     0 |3227B   76k|   0     0 | 563   802 
 96   4   0   0   0   0|   0     0 |3848B   98k|   0     0 | 573   839 
 97   3   0   0   0   0|   0    24k|2600B   66k|   0     0 | 554   750 
 94   5   0   0   0   1|   0     0 |3848B   98k|   0     0 | 581   881 
 97   3   0   0   0   0|   0     0 |3016B   76k|   0     0 | 567   791 
 95   5   0   0   0   0|   0     0 |3432B   87k|   0     0 | 564   840 
 97   3   0   0   0   0|   0     0 |3940B   98k|   0     0 | 567   782 
 38   2  59   0   0   1|   0    48k|  18k   99k|   0     0 | 559   216 
  0   1  99   0   0   0|   0     0 |  52B  366B|   0     0 | 153    62 
  0   1  99   0   0   0|   0     0 | 104B  492B|   0     0 | 112    48 
  0   1  99   0   0   0|   0     0 |  52B  366B|   0     0 | 138    61 
  0   1  99   0   0   0|   0     0 |  52B  366B|   0     0 | 101    46 
  0   0 100   0   0   0|   0     0 |  52B  366B|   0     0 | 110    47 
  0   0 100   0   0   0|   0     0 |  52B  366B|   0     0 | 110    55

These last runs at 400 concurrent connections seem to stick at the end, waiting on some last requests. We can see this in the ab report.

Server Software:        nginx/1.2.1
Server Hostname:
Server Port:            80

Document Path:          /
Document Length:        172 bytes

Concurrency Level:      400
Time taken for tests:   20.397 seconds
Complete requests:      3000
Failed requests:        1175
   (Connect: 0, Receive: 0, Length: 1175, Exceptions: 0)
Write errors:           0
Non-2xx responses:      1825
Total transferred:      12837398 bytes
HTML transferred:       12277473 bytes
Requests per second:    147.08 [#/sec] (mean)
Time per request:       2719.560 [ms] (mean)
Time per request:       6.799 [ms] (mean, across all concurrent requests)
Transfer rate:          614.63 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   31 105.1      1    1076
Processing:     1  842 2772.7     82   20243
Waiting:        1  842 2772.6     81   20242
Total:          2  873 2771.4    146   20264

Percentage of the requests served within a certain time (ms)
  50%    146
  66%    194
  75%    330
  80%    347
  90%   2003
  95%   2206
  98%  13677
  99%  17262
 100%  20264 (longest request)

It doesn’t always stick, but I sense there’s a software issue here.

Regardless, for a very cheap ARM-based server this seems good. It would be worthwhile to compare against an Intel setup.
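For reference, the report above corresponds to an ab invocation along these lines. The hostname is a placeholder and any extra flags are an assumption on my part; only the request count and concurrency are taken from the report:

```shell
# Hypothetical ab invocation matching the report above:
# 3000 total requests at a concurrency of 400 against the nginx front end.
ab -n 3000 -c 400 http://your-server/
```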

Does this apply to real-world needs? Well, the other day I tweeted to @gruber asking what his daringfireball stats were, and he was kind enough to respond with a set of 24-hour stats from his site, which is one of the more popular blogs out there. I don’t know whether he uses WordPress, but regardless, he responded with this:

Presuming an even distribution of requests within an hour (allowing us to divide by 3600), the busiest hour’s 15815 requests yields ~4.4 requests a second. It seems this setup would be able to handle that.
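The back-of-envelope math, using the 15815 busiest-hour figure quoted above:

```shell
# Assuming requests are spread evenly within the hour,
# divide the busiest hour's total by 3600 seconds.
awk 'BEGIN { printf "%.1f requests/sec\n", 15815/3600 }'
# → 4.4 requests/sec
```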

Update #1: Doing a little digging around last night, I discovered that in the 400-concurrent-request range the kernel is throwing page allocation failures, even though it isn’t actually out of memory. I’ve posted a question about this to the linaro-dev list, where Andy Green mentioned it might be related to a problem observed on the Raspberry Pi that seems to involve the ethernet driver. Stay tuned.

Graph : Plain vs W3 Total Cache WordPress perf on ARM

This graph shows successive numbers of concurrent connections against the number of connections per second the server can sustain. ab, the Apache benchmark tool, is used to drive traffic over a wired gigabit ethernet network. Plain WordPress on ARM handles 8.06 connections per second when dispatching 10 connections per second. At that rate the server is already falling behind, and adding more traffic hastens the point of failure in the form of dropped connections. With W3 Total Cache turned on, we’re able to service 70 connections per second. Once the offered rate goes above 70 connections per second, the server starts to fall behind and the time to service each request goes up. Within the test’s time period, ramping up to 130 connections per second still works, as long as the wait time doesn’t grow so long that it results in a dropped connection. Above 130, the wait time becomes so long that connections start to drop.

Updated: links to the new home for the Linaro-based server image and nginx armhf debs.

In science, being able to reproduce results outside the lab is essential. I thought I would try to reproduce the performance results of this blog post about a high-performance WordPress server on an ARM device. I’ve made updates based on Linaro images, and have prebuilt armhf debs for nginx along with a few setup changes.

In my case I’m using a PandaBoard ES, which is of course a dual-core Cortex-A9 OMAP4460 with 1 GB of RAM.

Let’s get started.

  1. Download the lnmp-server image from here.
  2. Boot the image
  3. apt-get update
  4. apt-get install mysql-server (be sure to set the server password and remember it!)
  5. Download nginx-common_1.2.1-0ubuntu0ppa1~precise_all.deb and nginx-full_1.2.1-0ubuntu0ppa1~precise_armhf.deb from here.
  6. dpkg -i nginx-common_1.2.1-0ubuntu0ppa1~precise_all.deb nginx-full_1.2.1-0ubuntu0ppa1~precise_armhf.deb
  7. apt-get install -f  (this will pull in various deps that nginx needs)
  8. mysql -u root -p
  9. Enter CREATE DATABASE wordpress;
    GRANT ALL PRIVILEGES ON wordpress.* TO 'wp_user'@'localhost' IDENTIFIED BY 'ENTER_A_PASSWORD';
  10. apt-get install php5-fpm php-pear php5-common php5-mysql php-apc
  11. edit /etc/php5/fpm/php.ini
  12. add to the bottom
    apc.write_lock = 1
    apc.slam_defense = 0
  13. edit /etc/php5/fpm/pool.d/www.conf
  14. replace the existing
    listen =

    line with

    listen = /dev/shm/php-fpm-www.sock
  15. and then add
    listen.owner = www-data
    listen.group = www-data
    listen.mode = 0660
  16. edit /etc/nginx/nginx.conf
  17. In the http section add
    port_in_redirect off;
  18. find
    # server_names_hash_bucket_size 64;

    change to

    server_names_hash_bucket_size 64;
  19. edit /etc/nginx/conf.d/drop and place the contents of this link into the file
  20. edit /etc/nginx/conf.d/default.conf and place the contents of this link into the file
  21. Within the same file, change all instances of the example domain to your appropriate domain.
  22. mkdir -p /var/www/
    chmod 775 /var/www
  23. service nginx start
  24. service php5-fpm restart
  25. cd /tmp
    wget https://wordpress.org/latest.tar.gz
    tar xfz latest.tar.gz
    mv wordpress/* /var/www/
  26. cp /var/www/wp-config-sample.php /var/www/wp-config.php
  27. edit /var/www/wp-config.php
  28. Visit this link and replace the fields in file with the values produced by the web page
  29. In the same file replace the following
    define('DB_NAME', 'wordpress');
    define('DB_USER', 'wp_user');
    define('DB_PASSWORD', 'whatever you entered for a password');
  30. Visit your new site in a web browser
  31. Fill in the fields appropriately
  32. Afterwords, log in
  33. Go to settings -> permalinks, select custom structure and enter

    Then hit “Save Changes”
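To see how the pieces above fit together: nginx needs to hand PHP requests to php-fpm over the unix socket configured in step 14. The linked default.conf from step 20 is authoritative; this is just a minimal sketch of the relevant server block, assuming the socket path and web root used above:

```nginx
# Hypothetical sketch of /etc/nginx/conf.d/default.conf;
# the linked config in step 20 is the real reference.
server {
    listen 80;
    root /var/www;
    index index.php;

    location / {
        # WordPress pretty permalinks: fall back to index.php
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # Socket path from the php-fpm pool config edited in step 14
        fastcgi_pass unix:/dev/shm/php-fpm-www.sock;
    }
}
```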

Now we’re at a point where you can exercise the system by creating a first post and doing some testing with ab. I did so, and at this point found the numbers weren’t that great.

Time to enable caches. Go to the admin page, select Plugins, and then Install New Plugin. Search for the “W3 Total Cache” plugin and install it. After the install completes, click Activate Plugin.

Now select the performance menu on the left on side. For all the entries, if you have an option to choose “PHP APC” do so. You’ll also need to specifically enable:

Database Cache
Object Cache

Save all settings, and then select deploy.

Again at this point you can run ab and collect performance data. I can see from my data that things are much improved and replicating nicely. Data and pretty graphs tomorrow. But I’m far from being done yet. Stay tuned!

nginx & Calxeda

Posted: June 22, 2012 in images, linaro, open_source, server

In progress I have an update to the Linaro based server image I’ve created. It fixes a couple of notable bugs.

  1. linaro-media-create would fail due to an already-installed kernel.
  2. openssh-server is removed for now – while the package was previously installed, it wouldn’t have had its keys generated, so unless you knew to manually generate your keys, slogin and friends would fail in unhelpful ways.

Besides the update to this LAMP image, I have another image I’ve created which replaces apache with nginx. Never heard of nginx? Read more about it here.

Also of note Calxeda has posted ApacheBench numbers using their new chips. That can be found here.

The new LAMP server image is located here. It is as yet untested.

The nginx-based image isn’t complete.

Today I went looking for the Linaro server image I had put together last year and well .. umm .. err … yeah, not there. Now don’t get upset. Maintaining lots of different reference images takes time, effort and resources. It’s cool.

Rolling up my sleeves, I took the time to create a version of the past server image using armhf with precise. I’ve test-booted it on my panda boards. It works. Tomorrow I intend to run ApacheBench against it.

The live-build config is located at : lp:~tom-gall/linaro/live-helper.config.precise.server

I’ve cross-built this on my Intel box with the a45 version of Linaro’s live-build. You can too.

If you’ve never cross built an image before you can find instructions here.

Or if I have a little time tomorrow I’ll post the image somewhere so you don’t have to rebuild it.

Updated 6/19 : Server image can be downloaded from here.

ARM Server Performance

Posted: June 15, 2012 in linaro, server

One thing I’ve been giving some thought to lately is just how well ARM hardware can stand up when used as a server. Take current Cortex-A9 hardware and do some comparisons… well, I’m glad to say others are thinking about it too. Here are a couple of links that I think are worth your time if you have an interest in this.

I think it would be very interesting to apply some time and effort to server app performance on ARM Linux, like what the Linaro Android team has done, and see just how far we might be able to push the ARM performance envelope. Fun stuff.