Category Archives: Data Center

Why Zerto?

This is the third installment of a series by Systems Team Lead Matt Peabody to begin to answer a question he hears all the time: “Why vendor X?”

We are going to make a change from the originally planned “Why VMware?” and talk about a newer partner with whom we are working.  Zerto is a Disaster Recovery product built for virtualization.  We had quite a few customers asking us how they could close the gap for their data loss in the event of a total failure.  Many of them were relying on offsite backups and realized the time to restore all of their data would be a large loss of productivity.  We had looked at Continuous Data Protection backup products before and were largely unimpressed with the complexity, and we didn’t like the performance hits the VMs took when they were being protected.  We found Zerto after talking with a few of our customers who had just deployed it to protect their virtual infrastructures.

We did our normal testing of the product and found there were a few things that stood out for Zerto as a leader in Business Continuity.

1. Deployment
During our initial conversations with customers and throughout our testing, we found the deployment to be one of the simplest installs.  We ran through the install on a Windows VM, deployed the replication appliances with a few clicks and had a replication infrastructure ready to go in about 30 minutes.  We then created a few protection groups and watched our VMs replicating in real time to our DR site.  This was all done without any reboots to hosts and no downtime to install guest agents in our VMs.

2. Automation
Zerto can protect VMs in Virtual Protection Groups (VPGs). These groups are similar to VMware’s vApp entities.  The VMs in a VPG are all kept in sync and make up a business application.  This allows for single-click testing and automated recovery when needed for DR purposes.  Since Zerto can keep all the VMs in sync across hosts and is WAN friendly, business-critical apps can be recovered extremely quickly and to points in time down to the second.  Testing of the groups is a few clicks away and will keep the production workload from going down.  VMs can have IPs changed in software so layer 2 networks don’t have to be stretched across WAN links.

3. Fault Tolerance
Many of the complaints we found with CDP products focused on what happened when the network or backup repository was offline.  With in-guest agents, many of them built up queues of data, filling up disk space, slowing down performance and bottlenecking memory.  Zerto handles all of these situations gracefully.  Each host has its own small replication proxy, which listens to the SCSI stream from the VMs it is protecting.  If WAN connectivity is failing, those VMs build up a queue on their disks without affecting the production application VMs’ performance.  They can also recover from long outages with ease by reprotecting a VM and only sending over changes from the last protected point in time.  Since the management architecture is distributed across datacenters, failure of one side does not impact the protection or recovery of the protected VMs.

4. Hardware Agnostic
With array-based replication, a customer needs nearly identical hardware in production and DR.  This cost was not an option for many customers that had a single datacenter and a smaller remote DR site.  With Zerto, replication can happen from an array-based production cluster to disparate hosts with local storage, to different arrays or to the cloud.  Since the replication happens above the array in the hypervisor, DR becomes easier and older hardware can be reused rather than thrown out.  With all the options for targets, DR becomes a commodity rather than an expensive, unused datacenter.

We have been very happy with the results Zerto has shown in the Business Continuity space.  They help our customers close the gap for their DR from days down to minutes.  Next will definitely be “Why VMware?”

Rydell Data Center: The Finished Product

Ever wondered what HPN employees do on weekends? This is it. This spring, High Point Networks worked with Rydell Auto Center in Grand Forks, ND to design and build a new Data Center for their business, executing a twelve hour cutover on a Saturday night.

HPN engineers worked with Rydell to design their wiring, power, cooling and monitoring systems, approaching it from the standpoint of not only accommodating for their needs today, but also their needs for tomorrow.

After months of planning and twelve hours of cutover executed by two network engineers, a system engineer, and a cabling engineer, this is the finished product.

Why Veeam?

This is the second installment of a series by Systems Team Lead Matt Peabody to begin to answer a question he hears all the time: “Why vendor X?”

One of our account managers had been helping to look for a backup product as our primary offering.  He brought Veeam to us over 4 years ago and was really excited about the product.  I was one of the engineers installing Veeam for customers and was managing it internally for our own data protection once we verified it was a good fit for us. Over the years, our knowledge of the product continues to expand, and we have seen overwhelming success for our customers using the product.

Backup products are plentiful, and the list of companies offering backup continues to grow.  There are a few things, however, that separate Veeam from the competition:

1. Setup
Veeam’s install has always been extremely easy to walk through.  They continue to improve the process, and the latest install is nearly “Click install, next, next, finish.”  From there, it usually takes us a few minutes to configure where to back up, what to back up and when to back up the data.  There is much planning involved to get to this point, but once we have the information we need, the setup process is always a breeze.

2. Performance
Veeam’s scale out architecture allows it to grow into our largest customers.  We can easily add more repositories if we need more space and more proxies if we need more network or CPU throughput.  Since we can eliminate single points of failure and throughput bottlenecks, we have shrunk backup windows for many of our customers from hours to minutes or even multiple days to hours.  Many of our customers utilize iSCSI arrays, and tapping into the SAN fabric with a Veeam server for backups greatly decreases load on the network and production infrastructure, further lessening the impact of backups.

3. Backup Testing
Whenever we talk to customers about their backup solutions, we always ask if they have ever tested their restores. The answer is usually that their backup product told them the backup was successful and they saw no reason to assume otherwise.  After working with many customers through many incidents, High Point Networks has adopted the mentality that a backup is not complete until a restore has been tested.  Veeam’s SureBackup automates the testing process and uses the Instant Restore feature to power on a live VM from the backup file and verify that all the services start. This confirms that the files in a backup are actually recoverable.

4. Restore
Many backup products back data up easily enough, but Veeam excels at restoring data too.  They have multiple ways to restore data, ranging from an Instant Restore of the entire VM, to a single file, all the way down to item-level (email, calendar appointment) recovery for Exchange.  Their Explorer wizards greatly improve the experience of restoring advanced items in different scenarios, and the user experience is just like browsing the backup using Outlook or the SharePoint management interface.  The restores are quick to get data back into production, and Veeam continues to improve their user experience.

Veeam is an excellent product and is extremely easy to set up to demo for yourself.  We rely on it in our data protection plan internally at High Point Networks, and will continue to recommend it as a primary backup solution to our customers.   Next, I’ll be answering “Why VMware?”

HPN Guides School Districts Through E-Rate Process

For years, school districts and libraries have been augmenting their telecommunication budgets with funds provided by the Universal Service Fund through the E-Rate program.  In 2014, the program was modernized to include internal connections under Priority 2.  This modernization funds school districts’ and libraries’ efforts to modernize their wired and wireless connections in proportion to their free and reduced lunch (FRL) student population.

The new funding formula provides $150 per student over five years, multiplied by the organization’s FRL ratio. For example, if a district’s FRL ratio is 8 out of 10 students – or 80% – and the district has 1,000 students, it is eligible for up to $120,000 over 5 years (1,000 X $150.00 X .80). The district will need to contribute $30,000 to receive the $120,000 in this example.  These funds are available one time during the five-year period, either all at once or distributed over the course of five years. Most districts are applying for their portion in the first year due to uncertainties about the program’s funding over the 5 years.
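The arithmetic above can be sketched as a short calculation. This is only an illustration of the formula as described in this post (the function name and the assumption that the district covers the unfunded remainder are mine, not part of the E-Rate rules):

```python
def erate_priority2_funding(students, frl_ratio, per_student=150.00):
    """Estimate five-year Priority 2 funding: students x $150 x FRL ratio.

    Returns (funded_amount, district_share), where the district's share
    is assumed to be the remainder of the total per-student budget.
    """
    total_budget = students * per_student      # total eligible project budget
    funded = total_budget * frl_ratio          # portion E-Rate will fund
    district_share = total_budget - funded     # district's own contribution
    return funded, district_share

# A district of 1,000 students with an 80% FRL ratio:
funded, share = erate_priority2_funding(1000, 0.80)
print(funded, share)  # 120000.0 30000.0
```

This matches the worked example: the district contributes $30,000 toward a $150,000 total budget to receive $120,000 in funding.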

A district needs to begin the process by filing a Form 470 stating their intent to procure Priority 2 funds for an internal project. This form allows vendors to bid for that project. Because districts are only required to abide by their purchasing policies, this is not necessarily an RFP process. As school districts choose a preferred vendor, they submit a Form 471 by the E-Rate deadline, April 16, 2015.  Once the Universal Service Administrative Company returns an intent to fund letter, the work can be scheduled.

Most districts choose a consultant to help them navigate these new and complex waters. This program provides opportunity for districts that have traditionally not been able to upgrade their technology due to financial or staffing constraints.  It also presents a challenge in deciding on the new technology to be used and how to implement it.  This is where a Value Added Reseller (VAR) like High Point Networks comes in.  HPN has been helping school districts improve their infrastructures for over ten years.

We have both the experience and engineering staff to successfully design, implement and support a variety of internal installations. The new E-Rate rules allow for a dizzying array of options to help students make the best use of the technology. Our staff brings their many years of school district experience to bear in designing a solution tailored to each individual district’s needs. We then implement that solution, train staff in its day to day operation, and also back it up with our own support staff. Whether it involves wireless or wired networks, unified communications, server storage or security, High Point Networks is looking forward to partnering with more school districts to enable the success of students and staff in our communities.

Why Nimble?

This is the first installment of a series by Systems Team Lead Matt Peabody to begin to answer a question he hears all the time: “Why vendor X?”

I’m kicking off this series with a vendor we have been supporting since they had relatively low name recognition, Nimble Storage.  Nimble Storage is a hybrid SAN vendor that combines spinning disk for write capacity and flash for read acceleration.

Nimble Storage was brought to us by one of their sales engineers that had worked with us in the past.  He described their product as the next big thing in storage.  After discussing the architecture, feature set, roadmap, and history of Nimble Storage with that sales engineer, we were sold.

There were a few things that stood out for us and continue to be items that distance Nimble Storage from other vendors:

  1. CASL (Cache Accelerated Sequential Layout)
    Nimble Storage’s CASL architecture was built from the ground up to take advantage of the shift in CPU architectures to multiple cores and not rely on disk spindles for speed.  This is the first architecture we had seen where the array was not spindle-bound, but CPU-bound.  This allows Nimble Storage to use slower spinning disks for writes and use all of the flash in their array for read performance.  The file system design also helps them take extremely thin, non-impacting snapshots, migrate hot data to flash in real time, and use commodity hardware, all without sacrificing performance.  These all combine to make a very affordable and reliable array with extremely fast response times.
  2. InfoSight
    InfoSight is a Big Data cloud service that collects real-time information and coalesces it into easy-to-read, understandable reports about the health of all the arrays that Nimble Storage has ever sold.  When customers ask how well the arrays actually perform, we can show people real-world performance statistics on existing installs and get them in touch with existing Nimble Storage customers.  We as partners rely on this data as much as our customers do to recommend upgrades and ensure the health of our installs.
  3.  Support
    A recent customer had an issue with a failover during a new install.  Where other vendors might have given up, citing an issue with the Fibre Channel infrastructure, Nimble had us pull logs and send that data to them.  After reading through the logs, they found the switches did not support the proper revision of the Fibre Channel specification.  Support for this was added to Nimble Storage’s code and will be included in the next firmware release.  Nimble Storage also aggregates the analytics from their entire install base into InfoSight to give customers proactive warnings if they will run into a known issue, and then recommends a fix for them.  Nimble Storage support continues to impress both our engineers and our customers.
  4.  Simplicity
    Nimble Storage keeps things simple.  Their pricing is array(s) + maintenance.  Maintenance is a simple percentage of the array cost and gets customers support, new firmware, hardware replacement, and any new features that come out in newer code versions.  Their arrays come in small configuration bundles that are easy to understand.   Nimble Storage’s GUI and command-line management are intuitive and easy to use.  When we show most customers the GUI, they reply with “That’s it?” and that’s a question we like to hear.

Nimble Storage continues to be a strong partner for us at High Point Networks.  Now that I’ve answered “Why Nimble?” we’ll continue next time with “Why Veeam?”

Array Performance Getting You Down? InfoSight Might Have the Answer.

In case you missed it, Nimble was named a “visionary” for the second consecutive year in Gartner’s November 20, 2014 Magic Quadrant for General-Purpose Disk Arrays report. Gartner’s report positioned Nimble for the “completeness” of its vision and its ability to execute on that vision.

We at Nimble believe that a key reason we were positioned as a visionary by Gartner is InfoSight™, the engine that monitors all Nimble arrays, collectively and individually, from the cloud. InfoSight comes free with every Nimble support contract and has allowed our customers to enjoy greater than 99.999 percent uptime, the gold standard for system availability.

InfoSight is the brainchild of Larry Lancaster, our chief data scientist, a self-proclaimed data nerd and one of Nimble’s resident visionaries. We have a few around here, among them cofounder Umesh Maheshwari, the creator of CASL™, our Cache Accelerated Sequential Layout architecture, the foundation of every Nimble solution, and the reason our arrays can intelligently allocate flash as applications need it.

I wrote about Larry last September in this blog, Nimble Storage’s Chief Data Scientist is the Wizard of the Next Big Thing. Larry joined Nimble in 2011, charged with applying his Big Data expertise to the problem of storage management. Larry created InfoSight and built it into Nimble’s first arrays. The timing is important: the more data InfoSight gathers, the smarter it becomes. To date, InfoSight has gathered the equivalent of thousands of years of real-world usage, allowing it to foresee problems long before they can bring systems down, and to recommend specific fixes.

Here are some real-life examples of the ways in which InfoSight has helped Nimble customers to keep arrays running in peak working condition:

  • InfoSight’s data protection planning capabilities provided the customer with the bandwidth requirements needed for replication on a per-volume basis. Now the customer knows that he is replicating, on average, approximately 105 Mbps of data. That bit of information allows his team to make a choice: forego replication for non-essential data, or increase WAN bandwidth.
  • InfoSight’s actionable upgrade recommendations helped improve sluggish performance. InfoSight determined the customer was experiencing CPU saturation and recommended an optimization change that could reduce the bottleneck. The customer eventually upgraded his hardware to optimize the environment.
  • InfoSight gave a customer the information needed to resolve a bandwidth issue for data replication. The problem sounded complex: the customer had several volumes replicating in both directions between the two arrays of interest, and replication traffic was sharing bandwidth with other traffic. InfoSight determined the bandwidth required to ensure timely replication.
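InfoSight’s internal models aren’t public, but the sizing question in the last two bullets can be approximated with a simple back-of-the-envelope calculation: how much WAN bandwidth is needed to replicate a given amount of changed data within a replication window? The function name and the example numbers below are my own illustration, not Nimble’s method:

```python
def required_replication_mbps(changed_gib_per_day, window_hours):
    """Rough estimate of the WAN bandwidth (in Mbps) needed to replicate
    a day's worth of changed data within the given replication window."""
    bits = changed_gib_per_day * 1024**3 * 8   # changed data, in bits
    seconds = window_hours * 3600              # replication window, in seconds
    return bits / seconds / 1e6                # megabits per second

# e.g. 100 GiB of daily change replicated within an 8-hour overnight window
print(round(required_replication_mbps(100, 8), 1))  # 29.8
```

A real sizing exercise would also account for compression, protocol overhead, and competing traffic on the link, which is exactly the kind of per-volume detail the customer got from InfoSight.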

Without InfoSight, some of these problems would likely have taken hours, days – even weeks – to identify and resolve. The time spent on fixes would have cost those businesses countless dollars in lost productivity. For Nimble’s customers, InfoSight has become a valuable member of their IT team; a storage expert that comes free of charge with every Nimble array and won’t drink all the coffee in the break room.

By Matt Miller, InfoSight Product Marketing