
ZFS on Linux

How ZFS on Linux compares to ZFS on Illumos or FreeBSD

On March 27, 2013, the ZoL maintainers announced that the 0.6.1 release was ready for wide-scale deployment on everything from desktops to servers. Yet, given the relative lack of maturity and adoption of the ZoL project, many maintainers and advocates of ZFS are not yet comfortable running ZoL in production.

The reason behind this reluctance is that ZFS on Solaris took many years to reach maturity, working through waves of data-corruption bugs and other issues along the way. In the same way, ZoL will need time to mature as a product, likely a year or more as more people deploy it in production. Developers who want to take advantage of ZFS now can start by rolling out less critical database servers (e.g., reporting servers, third-tier slave databases) and running the product there for about six months before extending it to all database servers. This builds the confidence and experience needed to work with ZoL. Alternatively, developers may want to run ZFS on OmniOS, where it has been battle-tested for years.

How ZFS on Linux Compares to ZFS on Illumos or FreeBSD
From the perspective of the system administrator, the implementation of ZFS on Linux is not very different from ZFS on Illumos or FreeBSD. Management and general usage are nearly identical; the only differences are OS-specific functionality. For example, on FreeBSD a user who wants to use a zvol for swap space sets the org.freebsd:swap=on property on the zvol to turn swap on. On Linux, a developer would create a vanilla zvol and set up swap like any other partition with mkswap and swapon. Under the latest versions of all three operating systems mentioned, the zpool version is at the same level, which is to say based on zpool v28 with additional features added by way of feature flags. They are compatible: users can create a zpool on Illumos/OmniOS, use it, export it, move the disks to a FreeBSD server, import and use the zpool there, export it again, move the disks to a Linux server, import it, and so on. This exact scenario is something we have done at OmniTI, and it worked without a hitch. One issue, however, is that ACL support and usability differ on each OS, so the user will likely have to clean up the permissions a bit.
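As a concrete sketch of the swap example above (the pool name rpool and the 2G size are assumptions for illustration, not taken from a particular system):

# FreeBSD: create a zvol and mark it as swap via the property
zfs create -V 2G rpool/swap
zfs set org.freebsd:swap=on rpool/swap

# Linux: create a zvol and treat it as an ordinary swap device
zfs create -V 2G rpool/swap
mkswap /dev/zvol/rpool/swap
swapon /dev/zvol/rpool/swap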

Caveats for Running ZFS on Arch Linux in a Production Environment
ZFS under Arch Linux is not part of the main package repository. Because ZFS and its utilities are maintained by a third party, developers must rely on that third party to keep the packages up to date. One issue is that every time a new kernel is released (which is frequent), the ZFS kernel modules must be rebuilt as well. If the company upgrades its system (pacman -Syu) and reboots without the ZFS modules having been recompiled as well, zpools will not initialize. This becomes especially important when the root filesystem is on ZFS, since it leaves the system unbootable; the user is forced to recover by means of a rescue CD or, in the case of AWS, by moving the EBS volumes to another instance and recovering from there. Linux does have a mechanism for automating this process, DKMS; however, the Arch zfs-modules-dkms package that provides this functionality is not kept up to date and shouldn't be used.
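A safer upgrade routine under these constraints might look like the following sketch (assuming the AUR build directories from Appendix A are still present in /root/build; the four package names are the same ones used there):

# Upgrade, then rebuild the ZFS stack against the new kernel BEFORE rebooting
pacman -Syu
for i in spl-utils spl zfs-utils zfs
do
    cd /root/build/${i} && makepkg -sf --asroot --noconfirm && \
    pacman -U --noconfirm ./${i}*.pkg.tar.xz
done
# Reboot only if every package rebuilt and installed cleanly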

Also, as briefly mentioned above, it should be noted that one cannot boot directly from ZFS on Linux; users must maintain a bootloader-compatible filesystem such as ext2/3/4 for /boot.

Currently, many of the utilities that report filesystem information are not ZFS-aware, and developers can get strange results from commands such as "df", which does not understand the relationship between datasets and their parents. This will not necessarily prevent anything from running, but it is worth noting. In general, it's best to use "zfs list" rather than "df" to get accurate results.
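For example (the dataset names follow the layout used later in this article; actual output will vary):

# df only sees mounted filesystems and misjudges shared pool space
df -h /home

# zfs list understands the dataset hierarchy and shared free space
zfs list -o name,used,avail,mountpoint rpool/home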

ZFS natively uses NFSv4-style ACLs and is not compatible with POSIX-style ACLs, so any application that relies on POSIX-style ACLs will have issues. Standard GNU utilities such as "ls", for example, are not NFSv4-ACL aware.
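For instance, a POSIX ACL operation on a ZoL dataset will typically fail outright (illustrative; the exact error text may vary by version):

# setfacl speaks POSIX ACLs, which these ZFS datasets do not support
setfacl -m u:nobody:r /home/somefile
# setfacl: /home/somefile: Operation not supported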

Lastly, the ZoL project proclaims that ZFS on Linux is production ready; however, it is worth noting that the Linux port is still very immature at this point. ZFS itself has been around and been tested for quite some time and is mature, but be careful and test the Linux port before using it in a production environment.

How To Install ZFS on Arch with RootFS on ZFS
The Arch Wiki page for Installing Linux on ZFS goes into great detail on how to install Linux on ZFS. The key points are as follows:

  • The ZFS utilities and kernel modules must be built/installed prior to beginning the installation (within the CD Boot environment)
  • Even though you can have the root FS on ZFS, the Linux bootloaders currently cannot load the kernel from ZFS, so you still need a small ext2/3 partition for /boot to hold the kernel, the initramfs, and the files the bootloader requires.
  • There is no "beadm" in Linux to support multiple Boot Environment snapshots currently. One of the benefits of ZFS on Illumos/OmniOS is the ability to rollback to an earlier boot environment when applying updates.
  • When building the initramfs image, the zfs hook must come before the filesystems hook, and the fsck hook should not be used at all (see the example HOOKS line after this list).
  • You need to enable the ZFS service in systemd as this is not enabled by default. Under a ZFS Root system, this is very important if you like your systems to boot.
  • The "kernel" line of the bootloader needs to include a parameter telling the kernel where the root FS resides. For example, if the root FS is on a Zpool named "rpool" and its dataset is rpool/ROOT/default, then this parameter would be zfs=rpool/ROOT/default.
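For reference, a HOOKS line in /etc/mkinitcpio.conf that satisfies the ordering rule above might look like this (a sketch; the exact set of hooks depends on the system, and the sed command in Appendix A produces an equivalent result):

HOOKS="base udev autodetect modconf block keyboard zfs filesystems"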

It is important to remember to export the zpool prior to rebooting after installation; otherwise ZFS will complain that the system is different and will not import the pool. This is because the new system is, in fact, a different system than the CD media boot environment. Also, it's a very good idea to rebuild the initramfs (mkinitcpio -p linux) right away once you log into the installed system for the first time, to avoid any "pool may be in use" errors caused by differences from the CD media boot environment in which the ramdisk was initially created.

Included at the end of this article are portions of a script used to build Arch Linux on ZFS. The only parts that have been removed are things specific to my environment. It is given as an example only, to illustrate the steps that can be used; note that it may or may not match the methods typically used in your environment.

References
ZFS on Linux Main Page: http://zfsonlinux.org

Arch Wiki - Installing Linux on ZFS: https://wiki.archlinux.org/index.php/Installing_Arch_Linux_on_ZFS

Arch Wiki - ZFS: https://wiki.archlinux.org/index.php/ZFS

Appendix A - Excerpts from Vagrant install script

The following are the commands we use when installing ZFS on Arch Linux under Vagrant. The Vagrant-specific bits have been removed, as they would not apply to an installation on a production server. The full script can be found here: https://github.com/Loki22/scripts/blob/master/Vagrant/archzfs_vagrant_install.sh

pacman -Syy

pacman -S --noconfirm base-devel

mkdir /root/build

cd /root/build

wget https://aur.archlinux.org/packages/sp/spl-utils/spl-utils.tar.gz

wget https://aur.archlinux.org/packages/sp/spl/spl.tar.gz

wget https://aur.archlinux.org/packages/zf/zfs-utils/zfs-utils.tar.gz

wget https://aur.archlinux.org/packages/zf/zfs/zfs.tar.gz

for i in spl-utils spl zfs-utils zfs
do
    cd /root/build && tar zxvf ${i}.tar.gz
    cd /root/build/${i}
    makepkg -s --asroot --noconfirm && pacman -U --noconfirm ./${i}*.pkg.tar.xz
done

# Install packages needed for ZFS

pacman -S --noconfirm archzfs dosfstools gptfdisk

# Clear the disk and initialize in GPT Format

sgdisk -o -g /dev/sda

# Partitioning - 3 Partitions (BIOS Boot Partition, /boot, and ZFS)

sgdisk -n 2:2048:+512M -c 2:"Linux Boot Partition" -t 2:8300 /dev/sda

sgdisk -n 3:0:0 -c 3:"ZFS Root Pool" -t 3:bf00 /dev/sda

sgdisk -n 1:34:2047 -c 1:"BIOS Boot Partition" -t 1:ef02 /dev/sda

# Create filesystem for /boot partition

mkfs.ext4 -L BOOT /dev/sda2

# Set up the ZFS Root Pool

modprobe zfs

zpool create rpool /dev/sda3

zfs set checksum=fletcher4 rpool

zfs set atime=off rpool

zfs set compression=lzjb rpool

zfs set mountpoint=none rpool

zpool export rpool

zpool import -d /dev/disk/by-id -R /mnt rpool

# Set up the initial BE (linux doesn't have beadm at this point, but not a bad idea to think ahead)

zfs create rpool/ROOT

zfs create -o mountpoint=/ rpool/ROOT/default

zpool set bootfs=rpool/ROOT/default rpool

# Set up datasets that are not part of the BE

zfs create -o mountpoint=/home -o setuid=off rpool/home

zfs create -o mountpoint=/root -o setuid=off rpool/roothome

# Create swap (example here is 2GB, use 4K block size for 64 bit systems)

zfs create -V 2G -b 4K rpool/swap

mkswap -Lswap -f /dev/rpool/swap

swapon /dev/rpool/swap

# Mount /boot

mkdir /mnt/boot

mount /dev/sda2 /mnt/boot

# Change ZFS repo to core now that we have it installed. This is so the new system will use updated modules linked to the new kernel as opposed to the somewhat more stale kernel that is used on the Install CD.

sed -i 's/demz-repo-archiso/demz-repo-core/' /etc/pacman.conf

# Bootstrap the new installation

pacstrap /mnt base base-devel archzfs sudo gnupg vim

# Generate the fstab, minus the ZFS entries, since mounting those is handled by ZFS

genfstab -U -p /mnt | grep boot >> /mnt/etc/fstab

# Configuration

CHROOT="arch-chroot /mnt"

# Hostname

echo "myhostname" > /mnt/etc/hostname

# Timezone and Clock

ln -s /usr/share/zoneinfo/America/New_York /mnt/etc/localtime

hwclock --systohc --utc

# Locale

sed -i 's/^#\(en_US.*\)/\1/' /mnt/etc/locale.gen

$CHROOT locale-gen

echo 'LANG="en_US.UTF-8"' > /mnt/etc/locale.conf

# Keymap

echo "KEYMAP=us" > /mnt/etc/vconsole.conf

# Mkinitcpio

sed -i 's/^\(HOOKS.*\)filesystems keyboard fsck/\1keyboard zfs filesystems/' /mnt/etc/mkinitcpio.conf

$CHROOT mkinitcpio -p linux

# Enable ZFS at boot

$CHROOT systemctl enable zfs.service

# Install GRUB

$CHROOT pacman -S --noconfirm grub-bios

modprobe dm-mod

$CHROOT grub-install --target=i386-pc --recheck --debug /dev/sda

cp /mnt/usr/share/locale/en\@quot/LC_MESSAGES/grub.mo /mnt/boot/grub/locale/en.mo

mv /mnt/boot/grub/grub.cfg /mnt/boot/grub/grub.cfg.orig

cat > /mnt/boot/grub/grub.cfg <<EOF
set timeout=2
set default=0

# (0) Arch Linux
menuentry "Arch Linux" {
    set root=(hd0,2)
    linux /vmlinuz-linux zfs=rpool/ROOT/default
    initrd /initramfs-linux.img
}

# (1) Arch Linux (fallback)
menuentry "Arch Linux - Fallback" {
    set root=(hd0,2)
    linux /vmlinuz-linux zfs=rpool/ROOT/default
    initrd /initramfs-linux-fallback.img
}
EOF

# SSH

$CHROOT pacman -S --noconfirm openssh

ln -s '/usr/lib/systemd/system/sshd.service' \
    '/mnt/etc/systemd/system/multi-user.target.wants/sshd.service'

# Networking on installed system

# Manual linking because systemd isn't running yet

# Run 'ip link' to check the network interface and make sure it's enp0s3

ln -s '/usr/lib/systemd/system/dhcpcd@.service' \
    '/mnt/etc/systemd/system/multi-user.target.wants/[email protected]'

# Clean up

# Remove downloaded packages

$CHROOT pacman -Scc --noconfirm

# Set your root password

$CHROOT passwd root

# Unmount filesystems, change ZFS mountpoints, and reboot

umount /mnt/boot

zfs umount -a

zpool export rpool

echo "If there were no errors, it would now be safe to reboot into the new system."

Appendix B - Recovery process if ZFS modules are not rebuilt on kernel upgrade
As mentioned above, the ZFS modules need to be rebuilt on every kernel upgrade. If this isn't done, you need to recover from a rescue environment. The recovery process (assuming booting from CD) is to build the ZFS modules and utilities from the AUR (spl-utils, spl, zfs-utils, and zfs) in the temporary rescue environment, load the ZFS module, import the zpool under /mnt, mount the /boot filesystem at /mnt/boot, chroot in, build the ZFS modules and utilities again against the kernel in the chroot environment, rebuild the initramfs (mkinitcpio -p linux), and reboot. Needless to say, this is not fun while people are screaming at you because the production server is down. The problem will be alleviated at some point when the ZFS packages are adopted into the main repositories and maintained with the rest of the release process.
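Condensed into commands, the recovery looks roughly like this (a sketch assuming the pool and partition layout from Appendix A; the AUR package builds are abbreviated):

# From the rescue CD: build and install spl-utils, spl, zfs-utils, and zfs
# (same makepkg loop as in Appendix A), then:
modprobe zfs
zpool import -d /dev/disk/by-id -R /mnt rpool
mount /dev/sda2 /mnt/boot
arch-chroot /mnt
# Inside the chroot: rebuild the four ZFS packages against the installed
# kernel, then regenerate the initramfs
mkinitcpio -p linux
exit
umount /mnt/boot
zpool export rpool
reboot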

More Stories By Kevin Loukinen

Kevin Loukinen is Site Reliability Engineer at OmniTI. Prior to that, he worked both as a Systems Administrator and Network Administrator for more than 12 years across several industries (financial, government and telecommunications).
