Network Storage – Getting the most from your filesystem

Written by Mike Solinap
Published on March 10, 2011

In my last blog entry, I mentioned that I would discuss how to roll your own network attached storage device. At first, this might sound trivial: take any commodity PC hardware, throw a large disk in there, install Linux, configure NFS, done. Not so fast. There are numerous considerations that must be taken into account in order to have a secure, reliable server that performs well.

This week I’ll be focusing on what I believe to be one of the most important considerations when building a network attached storage server — the filesystem.

Most modern filesystems have enough features to suit our needs. A system administrator would typically want to be able to do the following:

  • Easily resize the filesystem
  • Reliably recover the filesystem in the event of a system crash
  • Keep filesystem performance at a consistent level
  • Not worry about disk fragmentation
  • Maximize usable disk space

As network consultants, we provide network management services for clients who need IT infrastructure solutions. At one of our clients, however, we came across a special set of requirements. The client captures network data on the order of about 20 gigabytes per day. This data then gets parsed and inserted into a Postgres database. At 20GB per day, the storage requirements over their retention period are huge. This presents two problems. First, network captures are highly compressible, but without a way to store them in a compressed state transparently, users would have to spend time compressing them separately. Second, with such a large database, how can a backup be taken consistently, and in a reasonable amount of time?
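
To put that growth rate in perspective, here is a quick back-of-the-envelope calculation. The 20GB/day figure is the client's; the retention window below is purely an assumed placeholder for illustration, since the actual period isn't stated here:

```python
# Rough growth estimate for the capture data.
# 20 GB/day is the client's figure; the retention window below is an
# assumed placeholder for illustration, not the client's actual policy.
daily_capture_gb = 20
assumed_retention_days = 365

uncompressed_tb = daily_capture_gb * assumed_retention_days / 1024
print(f"~{uncompressed_tb:.1f} TB of raw captures over {assumed_retention_days} days")
# -> roughly 7.1 TB per year before compression, on top of the Postgres
#    database built from the parsed captures.
```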

Luckily, ZFS came to our rescue. ZFS is a filesystem developed by Sun, but unfortunately, due to a conflict between the GPL and CDDL licenses, a Linux kernel-based ZFS port has not been released yet. Some progress has been made by the http://zfsonlinux.org/ project, but I’m not sure it’s production-ready yet. Some of ZFS’ most powerful features include the following (a brief sketch of enabling them follows the list):

  • Storage pools (Similar to LVM)
  • Transparent compression (lzjb and gzip)
  • Snapshots
  • Deduplication
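
As a rough illustration of how these features get turned on, here is a minimal sketch using the standard zpool/zfs command-line tools, driven from Python for readability. The pool name, dataset name, and device paths are hypothetical, and deduplication assumes a recent enough pool version:

```python
import subprocess

def run(cmd):
    """Run one ZFS admin command and stop if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Pool, dataset, and device names below are hypothetical examples.
# Build a single-parity raidz pool from the data disks (roughly analogous
# to creating an LVM volume group plus logical volume in one step).
run(["zpool", "create", "tank", "raidz", "/dev/da1", "/dev/da2", "/dev/da3", "/dev/da4"])

# A dataset for the packet captures, with transparent gzip compression
# and deduplication enabled.
run(["zfs", "create", "tank/captures"])
run(["zfs", "set", "compression=gzip", "tank/captures"])
run(["zfs", "set", "dedup=on", "tank/captures"])

# See how much space the compression is actually saving.
run(["zfs", "get", "compressratio", "tank/captures"])
```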

The snapshot feature played an important part in backing up the large Postgres database. Previously, the only way to get all data files into a consistent state was to shut down the database completely, then copy the files off to another server or to tape. With several terabytes of data, however, that would mean hours of downtime. With snapshots, on the other hand, the database keeps running and all files in the snapshot are consistent. To the database, a snapshot looks like a crash: when restoring from one, it comes back online using crash recovery. Depending on how your application handles transactions, this may or may not be acceptable.
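
The backup workflow looks roughly like the sketch below. The dataset and host names are hypothetical; the key point is that the snapshot is atomic and the database never stops:

```python
import subprocess
from datetime import date

# Hypothetical dataset holding the Postgres data directory.
dataset = "tank/pgdata"
snapshot = f"{dataset}@backup-{date.today().isoformat()}"

# Take an atomic, point-in-time snapshot while Postgres keeps running.
subprocess.run(["zfs", "snapshot", snapshot], check=True)

# Stream the snapshot to a backup host. Starting Postgres against the
# restored copy triggers ordinary crash recovery, as described above.
send = subprocess.Popen(["zfs", "send", snapshot], stdout=subprocess.PIPE)
subprocess.run(["ssh", "backup-host", "zfs", "receive", "backuppool/pgdata"],
               stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```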

The transparent compression feature was equally important. A 3U server that we had available supported eight 3.5″ drives, for a total of 16TB raw capacity. With network captures as the main data source, the client could expect upwards of 25TB of usable compressed space. With 3TB drives becoming more common, the amount of potential space available in a 3U footprint keeps growing.
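
The arithmetic behind that estimate is below; the raidz layout and compression ratio are assumed values, since the post only gives the 16TB raw and roughly 25TB effective figures:

```python
# Capacity arithmetic for the 3U box. Drive size follows from the post
# (8 drives, 16 TB raw => 2 TB each); the raidz layout and compression
# ratio below are assumed values for illustration.
drives = 8
drive_tb = 2
raw_tb = drives * drive_tb                       # 16 TB raw

parity_drives = 1                                # assumed single-parity raidz
usable_tb = (drives - parity_drives) * drive_tb  # 14 TB before compression

assumed_compress_ratio = 1.8                     # assumed; captures compress well
effective_tb = usable_tb * assumed_compress_ratio

print(f"raw: {raw_tb} TB, after parity: {usable_tb} TB, "
      f"effective: ~{effective_tb:.0f} TB")
# -> 16 TB raw, 14 TB usable, ~25 TB effective, in line with the
#    "upwards of 25TB" figure above.
```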

Unfortunately, these “free” features do come at a price. For instance, if you are primarily a Linux shop, then running FreeBSD or OpenSolaris to get ZFS may not be feasible. Also, to take advantage of transparent compression, you will need a more powerful file server than is typically required. But if you can live with these limitations, ZFS provides a wealth of benefits.

Subscribe to our blog to keep informed on server storage solutions and other areas of IT Infrastructure.

Michael Solinap
Sr. Systems Integrator, SPK
