‘WON’T CATCH US WITH OUR FLASH PANTS DOWN’ SAYS NETAPP


NetApp is concerned with storage array management of flash caches in app servers – witness the Project Mercury presentation at the FAST ’11 conference in San Jose in February.

A Reg commenter pointed this out, as did a NetApp representative responding to a Dell server flash story, saying there’s “no catching us with flash pants down”.

The idea was to link a flash cache in a server to a shared, centrally-managed storage array so that virtual machines (VMs) and their storage could be moved between physical servers in a shared-pool datacentre without losing the I/O speed-up benefits of having a local flash cache.

NetApp devised the Mercury block-oriented, write-through flash cache as a KVM/QEMU block driver in a Linux guest VM. It provided an hg disk format, and requests sent to the hg device were handed over to an SSD cache.
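NetApp hasn't published Mercury's code, but the description above boils down to a familiar pattern. The sketch below (in Go, with invented names; it is not NetApp's implementation) shows the write-through, block-oriented idea: reads are served from the local SSD where possible, misses fall through to the array and warm the cache, and writes always reach the array, so the cache never holds dirty data and can be dropped or rebuilt if a VM moves.

```go
// Illustrative sketch only; not NetApp's Mercury implementation.
package main

import "fmt"

// Backing stands in for the shared storage array (hypothetical interface).
type Backing interface {
	ReadBlock(lba int64) []byte
	WriteBlock(lba int64, data []byte)
}

// arrayStub is an in-memory stand-in for the networked array.
type arrayStub struct{ blocks map[int64][]byte }

func (a *arrayStub) ReadBlock(lba int64) []byte        { return a.blocks[lba] }
func (a *arrayStub) WriteBlock(lba int64, data []byte) { a.blocks[lba] = data }

// WriteThroughCache mimics the Mercury idea: a block-oriented,
// write-through cache sitting between the guest and the array.
type WriteThroughCache struct {
	ssd          map[int64][]byte // blocks held on the local SSD
	backing      Backing
	hits, misses int
}

// ReadBlock serves hits locally; misses fetch from the array and warm the cache.
func (c *WriteThroughCache) ReadBlock(lba int64) []byte {
	if data, ok := c.ssd[lba]; ok {
		c.hits++
		return data
	}
	c.misses++
	data := c.backing.ReadBlock(lba)
	c.ssd[lba] = data
	return data
}

// WriteBlock is write-through: the array is always updated, so the cache
// never holds dirty data and can be dropped or rebuilt when a VM moves.
func (c *WriteThroughCache) WriteBlock(lba int64, data []byte) {
	c.backing.WriteBlock(lba, data)
	c.ssd[lba] = data
}

func main() {
	cache := &WriteThroughCache{
		ssd:     map[int64][]byte{},
		backing: &arrayStub{blocks: map[int64][]byte{}},
	}
	cache.WriteBlock(42, []byte("warm data"))
	cache.ReadBlock(42) // hit: served from the local cache
	cache.ReadBlock(7)  // miss: goes to the array, then cached
	fmt.Printf("hits=%d misses=%d\n", cache.hits, cache.misses)
}
```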

The cache was “warmed” (loaded) with a couple of days’ activity and then NetApp engineers looked at the server I/O effects. There was a “nearly 40 per cent reduction in mean I/O service time” with a “near 50 per cent reduction of requests sent to [the] server.” Almost all the reads were serviced from the Mercury cache.

Serial I/O showed a small improvement, while random I/O had a substantial one.
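A back-of-envelope check suggests the two quoted figures hang together. In the sketch below the roughly 50 per cent hit figure comes from the results above, while the latencies are assumptions picked purely for illustration (NetApp did not publish them): if about half of all requests are absorbed by a much faster local cache, a reduction in mean service time of a bit under half is what you would expect.

```go
// Back-of-envelope only: the ~50 per cent hit figure is from the article;
// the latencies are assumptions chosen purely for illustration.
package main

import "fmt"

func main() {
	const (
		hitRatio     = 0.50   // share of requests absorbed by the local flash cache
		cacheLatency = 0.2e-3 // assumed local flash service time (200µs)
		arrayLatency = 1.0e-3 // assumed networked array service time (1 ms)
	)

	before := arrayLatency // without the cache, every request goes to the array
	after := hitRatio*cacheLatency + (1-hitRatio)*arrayLatency

	reduction := (before - after) / before
	fmt.Printf("mean I/O service time: %.2f ms -> %.2f ms (%.0f%% reduction)\n",
		before*1e3, after*1e3, reduction*100)
}
```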

Server flash cache or storage array flash?

These were measurements of server I/O with and without the Mercury cache. We do not know what the effects would be if a server, with and without a Mercury cache, were compared against a storage array with a Flash Cache in its controller, or against a storage array with and without SSDs as a drive tier.

It seems likely that a Mercury-cached server would see a small read I/O improvement over a Flash Cache-equipped storage array controller, but not that much, as we could presume the flash contents would be the same and the Mercury cache would be a couple of microseconds nearer the server DRAM in latency terms.

It would be a few more microseconds nearer the server’s main memory than an SSD drive in a network-accessed storage array, assuming the data contents were the same again. Whether this latency improvement is significant or not, intuition can’t say; we need engineers and measurements to tell us that.

Judging by the existence of Dell, EMC and NetApp work in this area, the indications are that it is significant.

Texas Memory Systems view

Jamon Bowen, Director of Sales Engineering at Texas Memory Systems, blogged on this topic, answering this question: “Doesn’t being on the PCIe bus increase performance by being as close to the CPU as possible?”

He wrote: “Yes, but nowhere near the degree it is promoted. Going through an HBA to an FC-attached RamSan adds about 10µs of latency – that’s it. The reason that accessing SSDs through most SAN systems takes 1-2 ms is because of the software stack in the SAN head – not because of the PCIe to FC conversion.

“For our customers the decision to go with a PCIe RamSan-70 or an FC/IB-attached RamSan-630 comes down to whether the design needs to share storage.”
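Putting Bowen's figures side by side makes his point plain. In the sketch below, the 10µs FC hop and the 1-2 ms SAN software overhead are his numbers; the raw flash access time is an assumption added for illustration.

```go
// Illustrative comparison using the figures quoted above; the raw flash
// access time is an assumption, not a vendor number.
package main

import "fmt"

func main() {
	const (
		flashAccess = 25.0   // µs, assumed raw flash/SSD access time
		fcHop       = 10.0   // µs, HBA plus FC hop to an FC-attached RamSan (Bowen's figure)
		sanSoftware = 1500.0 // µs, SAN-head software stack (midpoint of Bowen's 1-2 ms)
	)

	fmt.Printf("PCIe flash in the server:       ~%.0f µs\n", flashAccess)
	fmt.Printf("FC-attached RamSan:             ~%.0f µs\n", flashAccess+fcHop)
	fmt.Printf("SSD behind a typical SAN stack: ~%.0f µs\n", flashAccess+fcHop+sanSoftware)
}
```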

TMS is not working on a way to make the PCIe RamSan-70 card shareable: “If the design needs… shared storage, use the shared storage systems.”

He is not saying server PCIe flash has no role in large-scale server infrastructures needing co-ordination. This is how he sees that role:

In the shared storage model, a large core network is needed so any server can access the storage at a reasonable rate. This is one of the main reasons a dedicated high-performance Storage Area Network is used for the server-to-storage network.

However, once there are more than a couple of dozen servers, the network starts to become rather large. Now imagine if you wish to have tens of thousands of servers; the network becomes a dominant cost … In these really large clusters the use of a network-attached shared storage model becomes impractical.

A new computing model developed for these environments – the shared-nothing scale-out cluster. The basic idea is that each computer processes the part of the data that is stored locally; many nodes do this in parallel, and then an assembly step compiles the results. This way all of the heavy data-to-CPU movement takes place inside a single server and only the results are gathered across the network. This is the foundation of Hadoop as well as several data warehouse appliances.

In effect, rather than virtualized servers, a large network, and virtualized storage via a SAN or NAS array, the servers and storage are virtualized in a single step using hardware that has CPU resources and Direct-Attached Storage.

PCIe SSDs are critical for this compute framework because reasonably priced servers are really quite powerful and can leverage quite a bit of storage performance. With the RamSan-70, each PCIe slot can provide 2 GB/s of throughput while fitting directly inside the server. This much local performance allows building high-performance nodes for a scale-out shared-nothing cluster that balances the CPU and storage resources.

Otherwise, a large number of disks would be needed for each node, or the nodes would have to scale to a lower CPU power than is readily available from mainstream servers. Both of these alternative options have negative power and space qualities that make them less desirable.

The rise of SSDs has provided a quantum leap in storage price-performance at a reasonable price for capacity, just as new compute frameworks are moving into mainstream applications.
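Bowen's shared-nothing model is, in essence, process the data where it lives and gather only the answers. A minimal sketch of the pattern follows (hypothetical data and names, with a channel standing in for the network); Hadoop and the warehouse appliances he mentions do the same thing at vastly larger scale.

```go
// Minimal shared-nothing sketch: each "node" works only on its local shard,
// and only the small per-node results cross the "network" (a channel here).
package main

import "fmt"

// localShard stands in for data held on a node's direct-attached storage.
type localShard []int

// process is the per-node work: it touches only local data.
func process(shard localShard) int {
	sum := 0
	for _, v := range shard {
		sum += v
	}
	return sum
}

func main() {
	// Hypothetical cluster: three nodes, each holding its own shard.
	shards := []localShard{
		{1, 2, 3},
		{10, 20, 30},
		{100, 200, 300},
	}

	results := make(chan int, len(shards))
	for _, s := range shards {
		go func(s localShard) { results <- process(s) }(s)
	}

	// Assembly step: only the per-node results are gathered centrally.
	total := 0
	for range shards {
		total += <-results
	}
	fmt.Println("cluster total:", total)
}
```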

A shared, centrally-managed storage array could feasibly pre-load the server PCIe caches in Bowen’s shared-nothing, scale-out cluster model, but would then have no further role to play. We might think that TMS would see no role for shared storage in such clusters because it doesn’t wish to be beholden to suppliers of such systems for RamSan-70 sales.

It will be interesting to see how HP, IBM and Oracle view the role of app server flash cache technology, and even more interesting to see server flash cache I/O behaviour with and without flash-cached and flash-tiered storage arrays.
