ZFS on Linux

How ZFS on Linux compares to ZFS on Illumos or FreeBSD

On March 27, 2013, the ZoL maintainers announced that the 0.6.1 release was ready for wide-scale deployment on everything from desktops to servers. Yet, owing to the project's relative immaturity and limited adoption, many maintainers and advocates of ZFS are still not comfortable running ZoL in production.

The reluctance stems from history: ZFS on Solaris took years to reach maturity and went through ups and downs of data-corruption bugs and other issues along the way. ZoL will likewise need time to mature as a product, likely a year or more of wider production deployment. Developers who want to take advantage of ZFS now can start by rolling out less critical database servers (reporting servers, third-tier slave databases, and the like) and run them for about six months before moving the remaining database servers over; this builds the confidence and operational experience needed to work with ZoL. Alternatively, they can run ZFS on OmniOS, where it has been battle-tested for years.

How ZFS on Linux Compares to ZFS on Illumos or FreeBSD
From a system administrator's perspective, ZFS on Linux differs little from ZFS on Illumos or FreeBSD; management and general usage are nearly identical. The differences that do exist are OS-specific functionality. For example, on FreeBSD a user who wants a zvol for swap space sets the org.freebsd:swap=on property on the zvol to turn swap on. On Linux, one creates a vanilla zvol and sets up swap like any other partition with mkswap and swapon. Under the latest versions of all three operating systems, the zpool version is at the same level, which is to say based on zpool v28 with additional features added by way of feature flags. They are compatible: users can create a zpool on Illumos/OmniOS, use it, export it, move the disks to a FreeBSD server, import the zpool, use it, export it, move the disks to a Linux server, import the zpool, and so on. We have done exactly this at OmniTI and it worked without a hitch. One caveat is that ACL support and usability differ on each OS, so the user will likely have to clean up the permissions a bit.
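
As a minimal sketch of that migration workflow (pool and device names here are illustrative, not from the article):

# on the source host (Illumos/OmniOS or FreeBSD)
zpool export tank              # cleanly detach the pool
# ...physically move the disks to the destination server...
# on the destination host (e.g., Linux)
zpool import tank              # scan attached disks and import the pool
zfs list -r tank               # datasets and data arrive intact

And the swap example from above, side by side:

# FreeBSD: mark a zvol as swap via a property
zfs set org.freebsd:swap=on tank/swapvol
# Linux: treat the zvol like any other swap partition
mkswap -f /dev/zvol/tank/swapvol
swapon /dev/zvol/tank/swapvol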

Caveats for Running ZFS on Arch Linux in a Production Environment
ZFS under Arch Linux is not part of the main package repository. Because ZFS and its utilities are maintained by a third party, users must rely on that third party to keep the packages up to date. One issue is that every time a new kernel is released (which is frequently), the ZFS kernel modules must be rebuilt as well. If the system is upgraded (pacman -Syu) and rebooted but the ZFS modules were not recompiled against the new kernel, the zpools will not initialize. This becomes especially important when the rootfs is on ZFS, since it would leave the system unbootable, forcing a recovery by means of a rescue CD or, in the case of AWS, by moving the EBS volumes to another instance and recovering from there. The usual Linux mechanism for automating such rebuilds is DKMS; however, the Arch zfs-modules-dkms package that provides this functionality is not kept up to date and shouldn't be used.
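
One way to avoid being caught out, sketched here with illustrative commands (the module directory layout can vary by kernel and package), is to confirm that a ZFS module exists for the new kernel before rebooting:

# after 'pacman -Syu' but before 'reboot'
NEWKERN=$(pacman -Q linux | awk '{print $2}')
ls /usr/lib/modules/ | grep "$NEWKERN"     # did the new kernel's module tree land?
find /usr/lib/modules -name 'zfs.ko*'      # is there a zfs module built against it?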

Also, as briefly mentioned above, one cannot boot directly from ZFS on Linux; users must maintain a bootloader-compatible filesystem for /boot, such as ext2/3/4.

Currently, many of the utilities that report filesystem information are not ZFS-aware, and commands such as "df" can give strange results because df does not understand the relationship between datasets and their parents. This will not necessarily prevent anything from running, but it is worth noting. In general, use "zfs list" rather than "df" to get accurate results.
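
For example, with a pool named rpool as in the install script below, compare the two views (a sketch; actual output will vary):

# zfs list understands the dataset hierarchy and shared pool space
zfs list -o name,used,avail,refer,mountpoint -r rpool
# df treats each dataset as an independent filesystem and repeats the
# pool's shared free space for every one of them
df -h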

ZFS natively uses NFSv4-style ACLs and is not compatible with POSIX-style ACLs, so any application that relies on POSIX-style ACLs will have issues. Standard GNU utilities such as "ls", for example, are not NFSv4-ACL-aware.

Lastly, while the ZoL project proclaims that ZFS on Linux is production ready, the port itself is still very young. ZFS proper has been tested and matured over many years; the Linux port has not, so be careful and test before using it in a production environment.

How To Install ZFS on Arch with RootFS on ZFS
The Arch Wiki page Installing Arch Linux on ZFS covers the process in great detail. The key points are as follows:

  • The ZFS utilities and kernel modules must be built/installed prior to beginning the installation (within the CD Boot environment)
  • Even though you can have the Root FS on ZFS, the Linux bootloaders cannot load the kernel from ZFS currently so you still need a small ext2/3 partition for /boot to hold the kernel, the initramfs, and files that the bootloader requires.
  • There is no "beadm" in Linux to support multiple Boot Environment snapshots currently. One of the benefits of ZFS on Illumos/OmniOS is the ability to rollback to an earlier boot environment when applying updates.
  • When building the initramfs image, the zfs hook must come before the filesystems hook, and the fsck hook should not be used at all (see the configuration sketch after this list).
  • You need to enable the ZFS service in systemd as this is not enabled by default. Under a ZFS Root system, this is very important if you like your systems to boot.
  • The "kernel" line of the bootloader needs to include a parameter telling the kernel where the root FS resides. For example, if the root FS is on a Zpool named "rpool" and its dataset is rpool/ROOT/default, then this parameter would be zfs=rpool/ROOT/default.

It is important to export the zpool prior to rebooting after installation; otherwise ZFS will complain that the system is different and will not import the pool. This is because the new system is, in fact, a different system from the CD-media boot environment. It is also a very good idea to rebuild the initramfs (mkinitcpio -p linux) as soon as you log into the installed system for the first time, to avoid "pool may be in use" errors caused by differences from the CD-media boot environment in which the ramdisk was originally created.
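
Condensed from Appendix A, the end-of-install sequence looks like this:

# from the live CD, once the installation is complete
umount /mnt/boot
zfs umount -a
zpool export rpool             # release the pool from the live environment
reboot
# then, on first login into the installed system
mkinitcpio -p linux            # rebuild the initramfs in its final home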

Included at the end of this article are portions of a script used to build Arch Linux on ZFS; only the parts specific to my environment have been removed. It is provided purely as an example of the steps involved and may not match the methods appropriate for your environment.

References
ZFS on Linux Main Page: http://zfsonlinux.org

Arch Wiki - Installing Linux on ZFS: https://wiki.archlinux.org/index.php/Installing_Arch_Linux_on_ZFS

Arch Wiki - ZFS: https://wiki.archlinux.org/index.php/ZFS

Appendix A - Excerpts from Vagrant install script

The following are the commands we use when installing ZFS on Arch Linux under Vagrant. The Vagrant-specific bits have been removed, as they would not apply to installation on a production server. The full script can be found here: https://github.com/Loki22/scripts/blob/master/Vagrant/archzfs_vagrant_install.sh

# Refresh the package databases and install build tools
pacman -Syy
pacman -S --noconfirm base-devel

# Fetch the ZFS stack from the AUR; build order matters:
# spl-utils, spl, zfs-utils, zfs
mkdir /root/build
cd /root/build
wget https://aur.archlinux.org/packages/sp/spl-utils/spl-utils.tar.gz
wget https://aur.archlinux.org/packages/sp/spl/spl.tar.gz
wget https://aur.archlinux.org/packages/zf/zfs-utils/zfs-utils.tar.gz
wget https://aur.archlinux.org/packages/zf/zfs/zfs.tar.gz

for i in spl-utils spl zfs-utils zfs
do
    cd /root/build && tar zxvf ${i}.tar.gz
    cd /root/build/${i}
    makepkg -s --asroot --noconfirm && pacman -U --noconfirm ./${i}*.pkg.tar.xz
done

# Install packages needed for ZFS
pacman -S --noconfirm archzfs dosfstools gptfdisk

# Clear the disk and initialize in GPT format
sgdisk -o -g /dev/sda

# Partitioning - 3 partitions (BIOS boot partition, /boot, and ZFS)
sgdisk -n 2:2048:+512M -c 2:"Linux Boot Partition" -t 2:8300 /dev/sda
sgdisk -n 3:0:0 -c 3:"ZFS Root Pool" -t 3:bf00 /dev/sda
sgdisk -n 1:34:2047 -c 1:"BIOS Boot Partition" -t 1:ef02 /dev/sda

# Create the filesystem for the /boot partition
mkfs.ext4 -L BOOT /dev/sda2

# Set up the ZFS root pool
modprobe zfs
zpool create rpool /dev/sda3
zfs set checksum=fletcher4 rpool
zfs set atime=off rpool
zfs set compression=lzjb rpool
zfs set mountpoint=none rpool

# Re-import the pool by stable device id, rooted at /mnt for the install
zpool export rpool
zpool import -d /dev/disk/by-id -R /mnt rpool

# Set up the initial BE (Linux doesn't have beadm at this point,
# but it's not a bad idea to think ahead)
zfs create rpool/ROOT
zfs create -o mountpoint=/ rpool/ROOT/default
zpool set bootfs=rpool/ROOT/default rpool

# Set up datasets that are not part of the BE
zfs create -o mountpoint=/home -o setuid=off rpool/home
zfs create -o mountpoint=/root -o setuid=off rpool/roothome

# Create swap (example here is 2GB; use 4K block size for 64-bit systems)
zfs create -V 2G -b 4K rpool/swap
mkswap -Lswap -f /dev/rpool/swap
swapon /dev/rpool/swap

# Mount /boot
mkdir /mnt/boot
mount /dev/sda2 /mnt/boot

# Switch the ZFS repo from the archiso repo to core so the new system
# uses modules linked against the current kernel rather than the
# somewhat more stale kernel used on the install CD
sed -i 's/demz-repo-archiso/demz-repo-core/' /etc/pacman.conf

# Bootstrap the new installation
pacstrap /mnt base base-devel archzfs sudo gnupg vim

# Generate the fstab, minus the ZFS entries (mounting is handled by ZFS)
genfstab -U -p /mnt | grep boot >> /mnt/etc/fstab

# Configuration
CHROOT="arch-chroot /mnt"

# Hostname
echo "myhostname" > /mnt/etc/hostname

# Timezone and clock
ln -s /usr/share/zoneinfo/America/New_York /mnt/etc/localtime
hwclock --systohc --utc

# Locale
sed -i 's/^#\(en_US.*\)/\1/' /mnt/etc/locale.gen
$CHROOT locale-gen
echo 'LANG="en_US.UTF-8"' > /mnt/etc/locale.conf

# Keymap
echo "KEYMAP=us" > /mnt/etc/vconsole.conf

# Mkinitcpio: move the zfs hook ahead of filesystems and drop fsck
sed -i 's/^\(HOOKS.*\)filesystems keyboard fsck/\1keyboard zfs filesystems/' /mnt/etc/mkinitcpio.conf
$CHROOT mkinitcpio -p linux

# Enable ZFS at boot
$CHROOT systemctl enable zfs.service

# Install GRUB
$CHROOT pacman -S --noconfirm grub-bios
modprobe dm-mod
$CHROOT grub-install --target=i386-pc --recheck --debug /dev/sda
cp /mnt/usr/share/locale/en\@quot/LC_MESSAGES/grub.mo /mnt/boot/grub/locale/en.mo

mv /mnt/boot/grub/grub.cfg /mnt/boot/grub/grub.cfg.orig
cat > /mnt/boot/grub/grub.cfg <<EOF
set timeout=2
set default=0

# (0) Arch Linux
menuentry "Arch Linux" {
    set root=(hd0,2)
    linux /vmlinuz-linux zfs=rpool/ROOT/default
    initrd /initramfs-linux.img
}

# (1) Arch Linux (fallback)
menuentry "Arch Linux - Fallback" {
    set root=(hd0,2)
    linux /vmlinuz-linux zfs=rpool/ROOT/default
    initrd /initramfs-linux-fallback.img
}
EOF

# SSH
$CHROOT pacman -S --noconfirm openssh
ln -s '/usr/lib/systemd/system/sshd.service' \
    '/mnt/etc/systemd/system/multi-user.target.wants/sshd.service'

# Networking on the installed system
# Manual linking because systemd isn't running yet
# Run 'ip link' to check the network interface and make sure it's enp0s3
ln -s '/usr/lib/systemd/system/dhcpcd@.service' \
    '/mnt/etc/systemd/system/multi-user.target.wants/dhcpcd@enp0s3.service'

# Clean up: remove downloaded packages
$CHROOT pacman -Scc --noconfirm

# Set the root password on the installed system
$CHROOT passwd root

# Unmount filesystems, export the pool, and reboot
umount /mnt/boot
zfs umount -a
zpool export rpool
echo "If there were no errors, it would now be safe to reboot into the new system."

Appendix B - Recovery process if ZFS modules are not rebuilt on kernel upgrade
As mentioned above, the ZFS modules must be rebuilt on every kernel upgrade. If this isn't done, you will need to recover from a rescue environment. The recovery process (assuming you boot from CD) is: build the ZFS modules and utilities from the AUR (spl-utils, spl, zfs-utils, and zfs) in the temporary rescue environment, load the ZFS module, import the zpool under /mnt, mount the /boot filesystem at /mnt/boot, chroot in, build the ZFS modules and utilities again against the kernel in the chroot environment, rebuild the initramfs (mkinitcpio -p linux), and reboot. Needless to say, this is not fun while people are screaming at you because the production server is down. The problem will be alleviated at some point when the ZFS packages are adopted into the main repositories and maintained with the rest of the release process.
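
A condensed sketch of that recovery sequence, reusing the pool layout and AUR package names from Appendix A (device names illustrative):

# in the rescue environment: build and install spl-utils, spl,
# zfs-utils, and zfs as in Appendix A, then bring everything up
modprobe zfs
zpool import -d /dev/disk/by-id -R /mnt rpool
mount /dev/sda2 /mnt/boot
arch-chroot /mnt
# inside the chroot: rebuild the four packages against the installed
# kernel, then regenerate the initramfs
mkinitcpio -p linux
exit
# detach cleanly and reboot into the repaired system
umount /mnt/boot
zfs umount -a
zpool export rpool
reboot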

More Stories By Kevin Loukinen

Kevin Loukinen is Site Reliability Engineer at OmniTI. Prior to that, he worked both as a Systems Administrator and Network Administrator for more than 12 years across several industries (financial, government and telecommunications).

