Jan 30

Yet another costing update on a straightforward yet powerful home network storage solution.  See my original post on the overall solution for background.

A colleague of mine who also has a QNAP TS-109 Pro told me about a less expensive way to house the external eSATA drive attached to the QNAP device. When I purchased my QNAP several years ago, I bought two Antec MX-1 external hard drive enclosures that support eSATA and USB. The Antec units have been working fine, but they are a bit bulky when moving one of the two drives in rotation to our offsite location.

My colleague’s idea is to purchase one of the new crop of external hard drive docking devices into which you simply plug your raw eSATA drive. When you want to swap the two backup drives, you do the same process as with the Antec devices, but instead of moving the external enclosure offsite, all you have to do is put the raw drive into an inexpensive carrying case – which is much smaller than the Antec and similar housings.

The end result is that instead of spending $40 USD x 2 for the Antec-like enclosures, you’d spend only $30 or so for the dock and several dollars for two plastic carrying cases.

The new calculation for a QNAP single internal + 2 rotated external drives comes to:

  • TS-119: $280 USD
  • 3 x 1 TB Seagate Barracuda 7200.12: $85 each = $255
  • 1 eSATA dock: $30
  • 2 plastic drive cases: $5 each = $10

For a total of: $575 USD

This solution, with the much more powerful TS-119 unit, costs only $10 more than the solution I priced out back in August 2009.

This solution also easily eclipses my $800+ solution of several years ago: the TS-119 has a 1 GHz CPU and a lot more memory than the TS-109 Pro, and the 1 TB drives are double the size of my 500 GB drives.

Tagged with:
Sep 05

Arghhhh!  Apple has done it again.  They’ve changed the manner in which NFS shares are mounted.  Back in 10.5 Leopard, the Directory Utility was used to mount shares; in 10.6 Snow Leopard, the Disk Utility program is where this is done.  That wouldn’t have been so bad except that the upgrade from 10.5 to 10.6 wiped out my NFS mount definitions, which forced me to learn this new procedure:

Fire up Disk Utility and select File -> NFS Mounts…   Doing so brings up a dialog very similar to the one in the 10.5 Directory Utility.  Here’s the list of mounts on my client:

List of NFS Mounts

You can click the “+” to add a new mount. In the following case, I clicked on the pencil to edit an existing mount:

Add NFS Share

I’ve been using the “-P” advanced option to access the shares on our QNAP NAS device; it makes the client connect from a reserved (privileged) source port, which many NFS servers require. See the Mac OS X mount_nfs man page for details.
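For reference, the same mount can be performed manually from Terminal — this is only a sketch, with an example share path and mount point from our setup; `-o resvport` is the mount-option spelling of the “-P” flag:

```shell
# Manual equivalent of a Disk Utility NFS mount definition (illustrative paths).
# -o resvport = connect from a reserved (privileged) source port, like mount_nfs -P.
sudo mkdir -p /Volumes/network/downloads
sudo mount -t nfs -o resvport network-disk:/share/HDA_DATA/Qdownload /Volumes/network/downloads
```

Defining the mount in Disk Utility instead makes it persistent and automatic, which is why I use the GUI rather than this manual form.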

Adding Shares to Finder

The trick to adding a convenient folder to Finder’s Places folder is to ensure that you create a directory containing the mount points. For example, I defined my mount points under /Volumes/network/.

But you might not see /Volumes and your containment directory when you look at the hard disk in Finder. Use “command-shift-G” to bring up a dialog that lets you go directly to the directory of interest. In my case, I went to /Volumes and simply dragged the “network” containment folder over to the Places folder. Voila! Now whenever I want to access my NFS shares, I simply click on the “network” folder in Finder. This is how I had it arranged in 10.5 as well.

As in 10.5, the shares are automatically mounted when you start your system. The day-to-day experience with this approach has been pretty good: even when our MacBook Pro isn’t attached to our home network, the machine doesn’t experience any noticeable hangs or hiccups due to the unavailability of the NAS server.

Tagged with:
Aug 25

We’re running MySQL on our low power home server system to support both our home monitoring system based on thermd and TNG Genealogy, a PHP-based web site application.  Since the internal solid state device (SSD) that acts as the hard drive for the home server has limited capacity (8 GB), I wanted to modify the MySQL configuration to maintain the databases on our QNAP NAS server.  Doing so provides us with lots of storage space and automatic backups via our multi-disk backup scheme. The only downside of the approach might be increased latency of database operations due to the use of NFS to access the database files.
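The data-directory change itself is a one-line edit to MySQL’s configuration; here’s a sketch (the share path matches my setup, but your my.cnf location and section layout may differ):

```
# /etc/mysql/my.cnf (excerpt)
[mysqld]
# Point MySQL's data directory at the NFS-mounted share
# instead of the default /var/lib/mysql/
datadir = /mnt/network-disk/database
```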

Figuring out how to modify the configuration of MySQL to stop using /var/lib/mysql/ and to use a newly mounted share was easy.  After creating a mysql account and group on the NAS server and forcing the ids to be the same as used on the home server system, I tried to start MySQL.  However, I ran into a permissions problem:

Aug 25 22:21:14 home-server mysqld_safe[24585]: started
Aug 25 22:21:14 home-server kernel: [645095.493657] audit(1251256874.809:14): type=1503 operation="inode_create" requested_mask="w::" denied_mask="w::" name="/mnt/network-disk/database/home-server.lower-test" pid=24587 profile="/usr/sbin/mysqld" namespace="default"
Aug 25 22:21:14 home-server mysqld[24588]: 090825 22:21:14 [Warning] Can't create test file /mnt/network-disk/database/home-server.lower-test
Aug 25 22:21:14 home-server kernel: [645095.497234] audit(1251256874.809:15): type=1503 operation="inode_create" requested_mask="w::" denied_mask="w::" name="/mnt/network-disk/database/home-server.lower-test" pid=24587 profile="/usr/sbin/mysqld" namespace="default"
Aug 25 22:21:14 home-server mysqld[24588]: 090825 22:21:14 [Warning] Can't create test file /mnt/network-disk/database/home-server.lower-test
Aug 25 22:21:14 home-server kernel: [645095.586722] audit(1251256874.899:16): type=1503 operation="inode_permission" requested_mask="rw::" denied_mask="rw::" name="/mnt/network-disk/database/ibdata1" pid=24587 profile="/usr/sbin/mysqld" namespace="default"
Aug 25 22:21:14 home-server mysqld[24588]: 090825 22:21:14  InnoDB: Operating system error number 13 in a file operation.
Aug 25 22:21:14 home-server mysqld[24588]: InnoDB: The error means mysqld does not have the access rights to
Aug 25 22:21:14 home-server mysqld[24588]: InnoDB: the directory.
Aug 25 22:21:14 home-server mysqld[24588]: InnoDB: File name ./ibdata1
Aug 25 22:21:14 home-server mysqld[24588]: InnoDB: File operation call: 'open'.
Aug 25 22:21:14 home-server mysqld[24588]: InnoDB: Cannot continue operation.
Aug 25 22:21:14 home-server mysqld_safe[24595]: ended

Although all of the filesystem ownership and permissions were set properly, MySQL wouldn’t start.  It turns out that Ubuntu ships with an AppArmor facility that includes a specific profile for MySQL; this profile enumerates the files MySQL may access.  The standard path of /var/lib/mysql/ was present, but the path of my new database share obviously was not.  Once I modified the profile under /etc/apparmor.d/ and restarted AppArmor, I was able to start MySQL.
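The fix amounted to adding rules like the following to MySQL’s AppArmor profile — a sketch only; the profile file is /etc/apparmor.d/usr.sbin.mysqld on Ubuntu of this vintage, and the share path matches my setup:

```
# /etc/apparmor.d/usr.sbin.mysqld (excerpt)
# Allow mysqld to read, write, and lock files on the NFS-mounted share
/mnt/network-disk/database/ r,
/mnt/network-disk/database/** rwk,
```

After editing the profile, reload AppArmor (e.g. `sudo /etc/init.d/apparmor reload`) before starting MySQL again.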

Tagged with:
Aug 25

Using today’s pricing, I recalculated the cost of our QNAP TS-109 Pro-based home storage solution:

  • QNAP TS-109 Pro II (256 MB RAM vs Pro’s 128 MB RAM): $230
  • 3 x 1 TB Seagate Barracuda 7200.12 SATA 3.5″ drives (vs 3 x 500 GB drives): $85 x 3 = $255
  • 2 x Antec MX-1 eSATA Enclosures: $40 x 2 = $80

Total = $565

Update: A colleague of mine pointed out the availability of a 1 TB eSATA drive + enclosure for only $70!  It’s not clear how the drive compares in performance to the Seagate drive listed above, nor is it clear whether the drive can be easily removed and put into the QNAP if and when the QNAP’s internal drive dies.  But the price is pretty compelling.  Taking this route would reduce the overall solution to $440 (assuming you move one of the 1 TB drives into the QNAP from the start).

That’s double the storage capacity of the original solution and double the memory on the NAS server.   Compared to the $815 spent on the original solution, the up-to-date solution saves $250!

Not bad considering the capacity improvements.

Tagged with:
Nov 05

Since I recently reconfigured our Macs to use NFS to access data housed on our home NAS server, it was time to start using NFS from Ubuntu as well. Functionally, using CIFS to access the QNAP NAS server worked without a hitch, but I knew we weren’t getting the best performance from it.  I covered the CIFS setup in an earlier blog entry.  This time, I needed to configure our new home server system to access the NAS server.

In the following excerpt from an Ubuntu system’s /etc/fstab file, “network-disk” is the host name of the NAS server.

network-disk:/share/HDA_DATA/Qdownload /mnt/network-disk/downloads nfs _netdev,auto,user 0 0
network-disk:/share/HDA_DATA/backup /mnt/network-disk/backup nfs _netdev,auto,user 0 0
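For clarity, here is the first entry again with its fields annotated (a commented sketch; the field meanings are standard fstab/NFS semantics):

```
# <server>:<exported path>             <mount point>                <type> <options>         <dump> <pass>
# _netdev: wait until the network is up before mounting
# auto:    mount automatically at boot
# user:    allow an ordinary user to mount/unmount the share
network-disk:/share/HDA_DATA/Qdownload /mnt/network-disk/downloads  nfs    _netdev,auto,user 0      0
```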
Tagged with:
Oct 26

I recently acquired a MacBook Pro 15″ laptop for work.  What a nice machine!

We’ve been using SMB to connect our other Mac and Linux computers in the house to our NAS server, but I wanted to give NFS another try given that it performs much better.  The QNAP NAS server had a bug in which its Q-RAID 1 feature wouldn’t work properly while NFS was in use.  Since that issue was present in the QNAP software more than a year ago, I am crossing my fingers that they have fixed it since then.

Setting up access to NFS shares on Mac OS X has become a lot easier with 10.5 (Leopard). I used the following advice to get my MacBook set up properly:

NFS on Mac OS X 10.5

I ended up modifying my Unix user ID on the NAS server to match the ID used on the MacBook so that permissions and ownership would work properly.  As I retrofit our iMac to access the NAS server via NFS, I’ll need to rework our other user IDs too.  On the NAS server, “chown -R userid:groupid” is a godsend for making these adjustments.
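A sketch of the ID-matching workflow (the account name and share path are illustrative, not the actual ones on our NAS):

```shell
# On the Mac client: find the numeric user and group IDs that the
# corresponding NAS accounts need to match.
id -u    # numeric user ID of the current user
id -g    # numeric group ID of the current user

# On the NAS server, after aligning the numeric IDs, re-own the user's
# files so permissions line up (name and path are illustrative):
# chown -R userid:groupid /share/HDA_DATA/homes/userid
```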

Tagged with:
Aug 06

Update: November 3, 2008: I gave up on trying to migrate my thermd deployment to the NAS server and instead assembled a new low power home server.

Continue reading »

Tagged with:
Jul 21

Updated November 5, 2008: Since I recently reconfigured our Linux clients to use NFS to access our NAS server, the following instructions no longer apply to our home network, though they may still be of use to you.

Although I started out using NFS to access the files housed on our QNAP TS-109 Pro NAS server, I ran into a QNAP bug that forced me to revert to using CIFS (the new and fancy name for SMB).  Since I wanted to automatically mount the network shares, I had to do some investigation to figure out the best way to do it.  I also wanted to perform the auto mounting from both my Ubuntu workstation and my Ubuntu laptop.  Of course with the laptop, it’s often not connected to our home network. Thus the solution needed to be flexible and not too bothersome when I’m traveling with the laptop.

The following approach has worked like a charm for the past 9 months or so.  It’s great that when I’m using my laptop, these mounts don’t adversely affect boot-up or ongoing operations when the home network is unavailable. Continue reading »

Tagged with:
Jan 04

I’ve been experimenting with the Q-RAID 1 feature of the QNAP TS-109 NAS server in an attempt to better understand how long it will take for an external drive to sync up with the NAS server’s single internal drive. One scenario outlined in our home network storage and backup strategy is the case in which two external drives are rotated on a monthly basis between the NAS server and an off-site storage location. This is done to avoid a complete loss of our data in case of a major home disaster or theft.

The rotation approach is simple: We disconnect the external drive from the NAS server, take it to the off-site location, retrieve the other external drive, bring it home and connect it to the NAS server. Continue reading »

Tagged with:
Jan 02

Several of my colleagues are using the same TS-109 NAS server as we are using at home. We recently discussed how you can access the user data from a mirrored Q-RAID1 external drive after the drive has been ejected and disconnected from the NAS server. For example, let’s say that the NAS’ internal drive experienced a severe error and you want to install a new replacement internal drive. As opposed to trying to rely on the Q-RAID1 sync process, can you just copy the content from the Q-RAID1 external drive to the newly installed internal drive? Continue reading »

Tagged with: