
MP3 and H.264 playback with chromium

Recently I noticed that I was unable to play certain types of audio and video files directly from within Chromium. I am using Chromium version 10.0.648.133 (77742) on Ubuntu 10.10. It seems that due to the various licensing issues surrounding the codecs required to play back some of these media types, they are not supported without installing some extra packages.

In order to get MP3 playback support up and running you will need to install the necessary software package using the following command:

 apt-get install chromium-codecs-ffmpeg-extra

After a quick browser restart, you should be able to enjoy MP3 playback on Chromium.

A similar process is required to enable MP4 and H.264 playback; this time you will need to install the following package instead:

 apt-get install chromium-codecs-ffmpeg-nonfree

Once again after a quick browser restart, you should be able to enjoy MP4 and H.264 playback on Chromium.
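
If playback still does not work after a restart, one quick sanity check (not Chromium-specific, just dpkg) is to confirm which of the codec packages actually ended up installed:

 dpkg -l | grep chromium-codecs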

DIY NAS: Part 1

A few weeks ago I decided to do some research into what it would take to build a NAS unit that would act as a storage server for all of my digital assets (audio, video, images, etc). I had purchased a 500GB external HD from Best Buy about a year ago; however, recently I was having problems reading the drive from my Linux laptop (since then I have not had any problems reading that same drive from my new MacBook Pro).

That was enough to scare me into investing a little more time and money into something that provided a higher level of fault tolerance, and that might also lead to some more restful nights.

HARDWARE:

My initial requirements were not super hefty; I knew I wanted the following:

1) a relatively small form factor case
2) a unit that consumed relatively little power
3) room to scale up to at least 4 drives
4) a 64-bit CPU and a motherboard that would handle at least 4GB of RAM
5) a setup that would allow me to use ZFS as the backend filesystem

After doing some initial research, I came across the case that I thought would be perfect for this build, the Chenbro ES34169. This case fit several of the requirements: it was small, I could use a mini-ITX motherboard, and it provided a backplane for at least four hot-swappable 3.5 inch hard drives.

After settling on a case, I set out to find a motherboard that would allow for at least 4 SATA devices and work well with OpenSolaris, EON, Nexenta or FreeNAS.

Initially I really liked the GA-D525TUD from GIGABYTE. It has 4 SATA ports, can handle up to 4GB of RAM, has a built-in Intel Atom D525, and was very reasonably priced at right about $100. The one huge downside of this motherboard was the onboard Realtek NIC. I came across several posts (here and here for example) indicating that these NICs had reliability issues, and that I was better off using an Intel chipset instead. Since this server was mainly going to be used as a file server, network performance and reliability were imperative, so I wanted to avoid a motherboard with a Realtek NIC if possible.

I also found the JNC98-525E-LF from JetWay. This motherboard had a lot of the same appeal as the GIGABYTE board, but it also had an HDMI port, a DVI port and analog audio outputs. I think it would be a good pick if I were building a media server instead of strictly a storage server. However, it also used a Realtek chipset for networking, so I decided to continue my search.

When all was said and done, I went with the MBD-X7SPA-H-O from Supermicro. This board has 6 SATA ports, supports 4GB of RAM, comes with a 64-bit processor, has an onboard USB port (which would allow me to hide my boot device inside the case itself) and, most importantly, has two Intel-based network cards. It was a bit pricey at right around $200, but I guess you are paying for the additional SATA ports and the second NIC.

I found 4GB of cheap Kingston RAM, so the only thing left hardware-wise was to decide what type of hard drives I would purchase and how many. I decided that I would use two WD20EADS 2TB hard drives from Western Digital.

The only real issue I ran into with this setup was that very few mini-ITX motherboards support ECC RAM, which is a must-have for enterprise-level storage setups. I was not willing to spend the extra money for an enterprise-level mini-ITX board, which sells for about $1000. My other option was to scrap my plans for a really small form factor mini-ITX build and go with something bigger like a micro-ATX or regular ATX motherboard, where I am sure I would have an easier time finding ECC support.

I decided that I would give up the ECC RAM option in order to gain the benefits that come with a much smaller machine.

SOFTWARE:

I went back and forth between using EON, OpenSolaris and FreeNAS. I liked EON because it has a very small footprint, it can be installed on a USB flash drive, it is based on OpenSolaris and it has a stable ZFS implementation. The downside of using EON for this project is that it would require a bit more expertise to configure and administer.

I have a good amount of OpenSolaris experience, and it obviously has the most stable ZFS implementation that exists, but I think it is overkill for this machine. I am also not a big fan of what Oracle is doing right now in terms of the open source community, so I decided to look a little closer at what FreeNAS had to offer.

FreeNAS is a FreeBSD-based NAS distro with a very nice web interface that allows you to configure almost all aspects of the server from any web browser. Research indicated that it had a stable and reliable ZFS port in place, I would be able to install and boot the OS from my USB flash drive, and if I got hit by a bus, one of my family members would have a better chance of figuring out how to retrieve the data…so FreeNAS it was.

In Part 2 of my post I plan to provide some more details about overall power use and network performance.

Proxmox 2.0 feature list

Martin Maurer sent an email to the Proxmox users mailing list detailing some of the features that we can expect from the next iteration of Proxmox VE. Martin expects that the first public beta release of the 2.x branch will be ready for use sometime around the second quarter of this year.

Here are some of the highlights currently slated for this release:

  • Complete new GUI
    • based on the Ext JS 4 JavaScript framework
    • fast search-driven interface, capable of handling hundreds and probably thousands of VMs
    • secure VNC console, supporting external VNC viewers with SSL support
    • role-based permission management for all objects (VMs, storages, nodes, etc.)
    • support for multiple authentication sources (e.g. local, MS ADS, LDAP, …)
  • Based on Debian 6.0 Squeeze
    • long-term 2.6.32 kernel with KVM and OpenVZ as default
    • second kernel branch with 2.6.x, KVM only
  • New cluster communication based on corosync, including:
    • Proxmox Cluster file system (pmcfs): database-driven file system for storing configuration files, replicated in realtime on all nodes using corosync
    • creates multi-master clusters (no single master anymore!)
    • cluster-wide logging
    • basis for HA setups with KVM guests
  • RESTful web API (see the request sketch after this list)
    • Resource Oriented Architecture (ROA)
    • declarative API definition using JSON Schema
    • enables easy integration with third-party management tools
  • Planned technology previews (CLI only)
    • SPICE protocol (remote display system for virtualized desktops)
    • Sheepdog (distributed storage system)
  • Commitment to Free Software (FOSS): public code repository and bug tracker for the 2.x code base
  • Topics for future releases
    • better resource monitoring
    • IO limits for VMs
    • extended pre-built Virtual Appliance downloads, including KVM appliances
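
To give a feel for what the RESTful API could make possible, here is a rough sketch of requesting a node list with curl. Nothing below comes from the announcement itself: the port, endpoint paths and parameter names are assumptions on my part and may well change before release.

 # request an authentication ticket (hypothetical endpoint and credentials)
 curl -k -d "username=root@pam" -d "password=secret" \
      https://proxmox.example.com:8006/api2/json/access/ticket

 # list the cluster nodes, passing back the ticket as a cookie
 curl -k -b "PVEAuthCookie=<ticket-from-previous-call>" \
      https://proxmox.example.com:8006/api2/json/nodes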

    Recursive search and copy while keeping the directory structure intact.

I recently needed to write a script that would search for a certain pattern in a file name and then copy that file from one directory to another. If you use the ‘find’ command with the standard parameters, you end up with all the files matching the pattern being placed into a single folder.

In this case I needed the command to maintain the directory structure (and create the folders if necessary) once a file matching the pattern was found.

The key to making this happen was the ‘--parents’ flag on cp, invoked via find’s -exec. Here is an example of the command I ended up using:

 find . -wholename "*search/pattern*" -exec cp -p --parents '{}' /new/folder/ ';'
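
As a quick illustration with made-up paths: if the pattern matches ./projects/audio/track01.mp3, the ‘--parents’ flag recreates projects/audio/ underneath the destination instead of dumping everything into one folder:

 mkdir -p /tmp/dest
 find . -wholename "*audio/track*" -exec cp -p --parents '{}' /tmp/dest/ ';'
 # result: /tmp/dest/projects/audio/track01.mp3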

    Updated Native Linux ZFS benchmarks

Phoronix.com just released some updated numbers from benchmarks they ran using the recently released GA version of the native ZFS kernel module for Linux. They conducted a total of 10 tests using the ZFS kernel module, Ext4, Btrfs and XFS.

The tests were performed using Ubuntu 10.10, with kernel version 2.6.35 for the ZFS tests and kernel version 2.6.37 for the other three filesystems.

It appears that these tests were all run using single-disk setups. I think it would be really great if Phoronix would also look into providing benchmarks on multi-disk setups, such as ZFS mirrored disks vs hardware or software RAID1 on Linux. I would also like to see benchmarks comparing RAID5 on Linux vs RAIDZ on ZFS. I think these kinds of tests might provide a more realistic comparison of real world enterprise-level storage configurations.

    SUNWattr_ro error:Permission denied on OpenSolaris using Gluster 3.0.5

    Last week I noticed an apparently obscure error message in my glusterfsd logfile. I was getting errors similar to this:

    [2011-01-15 18:59:45] E [compat.c:206:solaris_setxattr] libglusterfs: Couldn’t set extended attribute for /datapool/glusterfs/other_files (13)
    [2011-01-15 18:59:45] E [posix.c:3056:handle_pair] posix1: /datapool/glusterfs/other_files: key:SUNWattr_ro error:Permission denied

on several directories, as well as on the files that resided underneath those directories. These errors only occurred when Gluster attempted to stat the file or directory in question (‘ls -l’ vs plain ‘ls’).

After reviewing the entire logfile, I was unable to see any real pattern to the error messages. The errors were not very widespread either, given that I was only seeing them on maybe 75 or so files out of our total 3TB of data.

A Google search yielded very few results on the topic, with or without Gluster as a search term. What I was able to find out was this:

SUNWattr_ro and SUNWattr_rw are Solaris ‘system extended attributes’. These attributes cannot be removed from a file or directory; you can, however, prevent users from being able to set them at all by setting xattr=off, either during the creation of the zpool or by changing the property after the fact.
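
For reference, turning the property off would look something like this (the pool and device names are hypothetical; as explained below, I did not actually go this route):

 # disable extended attributes when the pool is created
 zpool create -O xattr=off datapool c0t1d0

 # or turn them off on an existing dataset
 zfs set xattr=off datapool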

This was not a viable solution for me, due to the fact that several of Gluster’s translators require extended attributes to be enabled on the underlying filesystem.

I was able to list these extended attributes on a test file using the following commands:

 user@solaris1# touch test.file
 user@solaris1# runat test.file ls -l
 total 2
 -r--r--r--   1 root     root          84 Jan 15 11:58 SUNWattr_ro
 -rw-r--r--   1 root     root         408 Jan 15 11:58 SUNWattr_rw

I also learned that some people were having problems with these attributes on Solaris 10 systems. This is because the kernels used by those versions of Solaris do not include, nor do they know how to translate, these ‘system extended attributes’, which were introduced in newer versions of Solaris. This has caused a headache for some people who have been trying to share files between Solaris 10 and Solaris 11 based servers.

In the end, the solution was not overly complex: I had to recursively copy each affected directory to a temporary location, delete the original folder and rename the new one:

    (cp -r folder folder.new;rm -rf folder;mv folder.new folder)

These commands must be run from a Gluster client mount point, so that Gluster can set or reset the necessary extended attributes.
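
If more than a handful of directories are affected, a small loop saves some typing; the mount point and directory names below are placeholders, and again this has to run against the client mount rather than the backend filesystem:

 cd /mnt/glusterfs
 for d in dir_a dir_b dir_c; do
     cp -r "$d" "$d.new" && rm -rf "$d" && mv "$d.new" "$d"
 done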

    Native Linux ZFS kernel module and stability.

UPDATE: If you are interested in ZFS on Linux you have two options at this point:

I have been actively following the zfsonlinux project, because once it is stable and ready it should offer superior performance by avoiding the extra overhead incurred by using FUSE with the zfs-fuse project.

    You can see another one of my posts concerning zfsonlinux here.

    ————————————————————————————————————————————————————-

There was a question posted in response to my previous blog post (found here) about the stability of the native Linux ZFS kernel module release. I thought I would just make a post out of my response:

So far I have only been able to perform some limited testing, given that the GA code was just released earlier this week. Some time ago I was given access to the beta builds, so I did some initial testing with those: I configured two mirrored vdevs consisting of two drives each, and it seemed relatively stable as far as I was concerned. As I stated in my previous post, there is a known issue with the ‘zfs rollback’ command, which I tested using the GA release, and I did in fact have problems with it.

The workaround at this point seems to be to perform a reboot after the rollback, followed by a scrub of the pool (‘zpool scrub’) once the machine is back up. Personally I am hoping this gets fixed soon, because not everyone has the same level of flexibility when it comes to rebooting their servers and storage nodes.
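
Assuming a pool named ‘tank’ and a snapshot named ‘pre-test’ (both hypothetical), the workaround sequence looks roughly like this:

 zfs rollback tank/data@pre-test
 reboot
 # once the machine is back up:
 zpool scrub tank
 zpool status tank    # confirm the scrub completes without errors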

    As far as I understand it, this module really consists of three pieces:

1) SPL – a Linux kernel module which provides many of the Solaris kernel APIs. This layer makes it possible to run Solaris kernel code in the Linux kernel with relatively minimal modification.
2) ZFS – a Linux kernel module which provides a fully functional and stable SPA, DMU, and ZVOL layer.
3) LZFS – a Linux kernel module which provides the necessary POSIX layer.

Pieces #1 and #2 have been available for a while and are derived from code taken from the ZFS on Linux project found here. The folks at KQ Infotech are building on that work and providing piece #3, the missing POSIX layer.

Only time will tell how stable the code really is. My opinion at this point is that most software projects have some number of known bugs (and even more have some unknown number of bugs as well), so I am going to continue testing in a non-production environment for the next few months. At this point I have not experienced any instability (other than what was discussed above) or crashing, and all the commands seem to work as advertised. There are a lot of features I have not been able to test yet, such as dedup and compression, so there is lots more to look at in the upcoming weeks.
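
When I do get to those features, the tests themselves are one-liners; the dataset name below is just a placeholder:

 zfs set compression=on tank/data
 zfs set dedup=on tank/data
 zfs get compressratio,dedup tank/data    # confirm the properties took effect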

KQStor’s business model seems to be one where the source code is provided and support is charged for. So far I have been able to have an open and productive dialog with their developers, and they have been very responsive to my inquiries. However, it does not appear that they are going to be setting up public tools such as mailing lists or forums, due to their current business model. I am hoping this will change in the near future, as I truly believe everyone would benefit from those kinds of public resources, and there is no doubt in my mind that such tools would only lead to a more stable product in the long run.

    Native Linux ZFS kernel module goes GA.

UPDATE: If you are interested in ZFS on Linux you have two options at this point:

I have been actively following the zfsonlinux project, because once it is stable and ready it should offer superior performance by avoiding the extra overhead incurred by using FUSE with the zfs-fuse project.

    You can read more about using zfsonlinux in another one of my posts here.

    ————————————————————————————————————————————————————-
Earlier this week KQ Infotech released the latest build of their ZFS kernel modules for Linux. This version has been labeled GA and ready for wider testing (and maybe ready for production).

KQStor has been set up as a place where you can go to sign up for an account, download the software and get additional support.

    The source code for the module can be found here:

    https://github.com/zfs-linux

Currently, mounting the root filesystem is not supported; however, a post here describes a procedure that can be used to do it.

The user’s guide also hints at possible problems using ‘zfs rollback’ under certain circumstances. I have asked for more specific information on this issue, and I will pass along any other information I can uncover.

After looking around the various mailing lists, this looks like it might be an issue that exists in zfs-fuse as well, and thus in the current version of the kernel module, since they share a lot of the same code.

    Installation and usage:

Installation of the module is fairly simple; I downloaded the pre-packaged .deb packages for Ubuntu 10.10 server and installed them with dpkg:

    root@server1:/root/Deb_Package_Ubuntu10.10_2.6.35-22-server# dpkg -i *.deb

    If all goes well you should be able to list the loaded modules:

    root@server1:/root/Deb_Package_Ubuntu10.10_2.6.35-22-server# lsmod |grep zfs
    lzfs                   36377  3
    zfs                   968234  1 lzfs
    zcommon                42172  1 zfs
    znvpair                47541  2 zfs,zcommon
    zavl                    6915  1 zfs
    zlib_deflate           21866  1 zfs
    zunicode              323430  1 zfs
    spl                   116684  6 lzfs,zfs,zcommon,znvpair,zavl,zunicode

    Now I can create a test pool:

    root@server1:/root#zpool create test-mirror mirror sdc sdd

    Now check the status of the zpool:

 root@server1:/root# zpool status
   pool: test-mirror
  state: ONLINE
   scan: none requested
 config:

         NAME           STATE     READ WRITE CKSUM
         test-mirror    ONLINE       0     0     0
           mirror-0     ONLINE       0     0     0
             sdc1       ONLINE       0     0     0
             sdd1       ONLINE       0     0     0
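
A reasonable next step (not part of the original walkthrough; the dataset name and mountpoint are just examples) is to create a filesystem on the new pool and give it a mountpoint:

 root@server1:/root# zfs create test-mirror/data
 root@server1:/root# zfs set mountpoint=/storage test-mirror/data
 root@server1:/root# zfs list -r test-mirror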

    Gluster on OpenSolaris so far…part 1.

    We have been running Gluster in our production environment for about 1 month now, so I figured I would post some details about our setup and our experiences with Gluster and OpenSolaris so far.

    Overview:

Currently we have a 2 node Gluster cluster; we are using the replicate translator in order to provide RAID-1 style mirroring of the filesystem. The initial requirements were to provide a solution that would house our digital media archive (audio, video, etc), scale up to around 150TB, support exports such as CIFS and NFS, and be extremely stable.

It was decided that we would use ZFS as our underlying filesystem, due to its data integrity features as well as its support for taking filesystem snapshots, both of which were also very high on the requirement list for this project.
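
Snapshots in particular are cheap to create and easy to script; a minimal sketch, with a hypothetical dataset and snapshot name:

 zfs snapshot datapool/glusterfs@daily-2011-01-15
 zfs list -t snapshot
 # roll the dataset back to the snapshot if something goes wrong
 zfs rollback datapool/glusterfs@daily-2011-01-15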

    Although FreeBSD has had ZFS support for quite some time, there were some known issues (with 32 vs 64 bit inode numbers) at the time of my research that prevented us from going that route.

Just this week KQStor released their native ZFS kernel module for Linux, which as of this latest release is supposed to fully support extended filesystem attributes, a requirement for Gluster to function properly. At the time of our research the software was still in beta and did not support extended attributes, so we were unable to consider and/or test this configuration either.

The choice was then made to go with ZFS on OpenSolaris (2008.11 specifically, due to the 3ware drivers available at the time). Currently there is no FUSE support under Solaris, so although a Solaris variant works without a problem on the server side for the storage nodes, you will be required to use a head node running an OS that does support FUSE for the client side.

The latest version of Gluster to be fully supported on the Solaris platform is version 3.0.5. The 3.1.x branch introduced some nice new features; however, we will have to either port our storage nodes to Linux or wait until the folks at Gluster decide to release 3.1.x for Solaris (which I am not sure will happen anytime soon).

    Here is the current hardware/software configuration:

• CPU: 2 x Intel Xeon E5410 @ 2.33GHz
• RAM: 32GB DDR2 DIMMs
• Hard drives: 48 x 2TB Western Digital SATA II
• RAID controllers: 2 x 3ware 9650SE-24M8 PCIe
• OpenSolaris version 2008.11
• GlusterFS version 3.0.5
• Samba version 3.2.5 (Gluster1)

    ZFS Setup:

Setup for the two OS drives was pretty straightforward: we created a two-disk mirrored rpool. This will allow us to survive a disk failure in the root pool and still be able to boot the system.

Since we have 48 disks to work with for our data pool, we created a total of 6 raidz2 vdevs, each consisting of 7 physical disks. This setup gives us 75TB of space (53TB usable) per node, while leaving 6 disks available to use as spares.
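
The zpool create command for a layout like this ends up being one long line; the sketch below shows only the first two of the six raidz2 vdevs, with made-up Solaris device names:

 # first two of six raidz2 vdevs shown; the remaining four follow the same pattern
 zpool create datapool \
     raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
     raidz2 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0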

    user@server1:/# zpool list
    NAME       SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    rpool     1.81T  19.6G  1.79T     1%  ONLINE  -
    datapool  75.8T  9.01T  66.7T    11%  ONLINE  -
    

    Gluster setup:

    Creating the Gluster .vol configuration files is easily done via the glusterfs-volgen command:

    user1@host1:/#glusterfs-volgen --name cluster01 --raid 1 server1.hostname.com:/data/path server2.hostname.com:/data/path

That command will produce two volume files: one called ‘glusterfsd.vol’, which is used on the server side, and one called ‘glusterfs.vol’, which is used on the client.

Starting glusterfsd on the server side is straightforward:

    user1@host1:/# /usr/glusterfs/sbin/glusterfsd

    Starting gluster on the client side is straightforward as well:

    user1@host2:/#/usr/glusterfs/sbin/glusterfs --volfile=/usr/glusterfs/etc/glusterfs/glusterfs.vol /mnt/glusterfs/
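
Once the client mount succeeds, a couple of quick checks (nothing Gluster-specific) confirm that the volume is mounted and writable:

 df -h /mnt/glusterfs
 touch /mnt/glusterfs/write.test && ls -l /mnt/glusterfs/write.test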

    In a later blog post I plan to talk more about issues that we have encountered running this specific setup in a production environment.

    More native Linux ZFS benchmarks

Phoronix has published a nice 5-page article, which includes some in-depth filesystem benchmarks. They tested filesystems such as Btrfs, Ext4, XFS, ZFS-FUSE and the ZFS kernel module from KQ Infotech.

    Here is an excerpt taken from the conclusion section of the article:

    “In terms of our ZFS on Linux benchmarks if you have desired this Sun-created file-system on Linux, hopefully it is not because of the performance expectations for this file-system. As these results illustrate, this ZFS file-system implementation for Linux is not superior to the Linux popular file-systems like EXT4, Btrfs, and XFS. There are a few areas where the ZFS Linux disk performance was competitive, but overall it was noticeably slower than the big three Linux file-systems in a common single disk configuration. That though is not to say ZFS on Linux will be useless as the performance is at least acceptable and clearly superior to that of ZFS-FUSE. More importantly, there are a number of technical merits to the ZFS file-system that makes it one of the most interesting file-systems around.”

With that being said, I believe that a lot of the time when people choose to use ZFS as the underlying filesystem for a project, they are not doing so because of its reputation as a wonderfully fast filesystem. ZFS features such as data integrity, large capacity, snapshotting and deduplication are more likely to drive your rationale for using ZFS as part of your backend storage solution.

Another thing to note about these benchmarks is that they were run on the beta version of the kernel module. I assume that once the GA version (and source code) is released, there will be plenty of opportunities to mitigate some of these concerns; on the other hand, you are going to have to live with some of the overhead that comes with using ZFS if you want to take advantage of its large feature set.