storage system

By: Seagate     Published Date: Jan 26, 2016
Finding oil and gas has always been a tricky proposition, given that reserves are primarily hidden underground, and as often as not, under the ocean as well. The costs involved in acquiring rights to a site, drilling the wells, and operating them are considerable and have driven the industry to adopt advanced technologies for locating the most promising sites. As a consequence, oil and gas exploration today is essentially an exercise in scientific visualization and modeling, employing some of the most advanced computational technologies available. High performance computing (HPC) systems are being used to fill these needs, primarily with x86-based cluster computers and Lustre storage systems. The technology is well developed, but the scale of the problem demands medium to large-sized systems, requiring a significant capital outlay and operating expense. The most powerful systems deployed by oil and gas companies are represented by petaflop-scale computers with multiple petabytes of attached storage.
Tags : 
     Seagate
By: Intel     Published Date: Aug 06, 2014
Designing a large-scale, high-performance data storage system presents significant challenges. This paper describes a step-by-step approach to designing such a system and presents an iterative methodology that applies at both the component level and the system level. A detailed case study using the methodology described to design a Lustre storage system is presented.
Tags : intel, high performance storage
     Intel
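The paper's step-by-step, iterative sizing approach can be illustrated with a rough back-of-envelope sketch. All component figures below (per-disk bandwidth, disks per server, per-server network limits) are hypothetical assumptions for illustration, not numbers from the Intel paper:

```python
import math

# Hedged sketch: iterative sizing of a parallel storage system,
# applying the methodology at the component level and the system level.
# All component figures are illustrative assumptions only.

def size_storage_system(target_gbps, target_pb,
                        disk_tb=4.0, disk_mbps=150.0,
                        disks_per_server=60, server_gbps=5.0):
    """Return (servers, disks) meeting both capacity and throughput targets."""
    # Component level: a server's deliverable bandwidth is the minimum of
    # its network limit and the aggregate bandwidth of its disks.
    per_server_gbps = min(server_gbps, disks_per_server * disk_mbps / 1000.0)
    per_server_tb = disks_per_server * disk_tb

    # System level: each target (throughput, capacity) implies a server
    # count; the design must satisfy whichever is larger.
    servers_for_bw = math.ceil(target_gbps / per_server_gbps)
    servers_for_cap = math.ceil(target_pb * 1000.0 / per_server_tb)
    servers = max(servers_for_bw, servers_for_cap)
    return servers, servers * disks_per_server

# Example: 100 GB/s aggregate throughput, 2 PB usable capacity.
servers, disks = size_storage_system(target_gbps=100.0, target_pb=2.0)
print(servers, disks)
```

In this sketch the throughput target, not capacity, dictates the server count; in practice one would iterate, feeding the resulting totals back into component choices (faster networks, denser disks) until the design converges.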
By: Intel     Published Date: Aug 06, 2014
Powering Big Data Workloads with Intel® Enterprise Edition for Lustre* software. The Intel® portfolio for high-performance computing provides the following technology solutions:
• Compute: The Intel® Xeon processor E7 family provides a leap forward for every discipline that depends on HPC, with industry-leading performance and improved performance per watt. Add Intel® Xeon Phi coprocessors to your clusters and workstations to increase performance for highly parallel applications and code segments. Each coprocessor can add over a teraflop of performance and is compatible with software written for the Intel® Xeon processor E7 family, so you don’t need to rewrite code or master new development tools.
• Storage: High-performance, highly scalable storage solutions with Intel® Enterprise Edition for Lustre* software and Intel® Xeon processor E7-based storage systems for centralized storage, plus reliable and responsive local storage with Intel® Solid State Drives.
• Networking: Intel® True Scale Fabric and Networking technologies, built for HPC to deliver fast message rates and low latency.
• Software and Tools: A broad range of software and tools to optimize and parallelize your software and clusters.
Tags : 
     Intel
By: IBM     Published Date: Jun 05, 2014
This book examines data storage and management challenges and explains software-defined storage, an innovative solution for high-performance, cost-effective storage using the IBM General Parallel File System (GPFS).
Tags : ibm
     IBM
By: IBM     Published Date: Sep 16, 2015
The IBM Spectrum Scale solution delivered up to 11x better throughput than EMC Isilon for Spectrum Protect (TSM) workloads. Using published data, Edison compared a solution composed of EMC® Isilon® against an IBM® Spectrum Scale™ solution. (IBM Spectrum Scale was formerly IBM® General Parallel File System™ or IBM® GPFS™, also known by the code name Elastic Storage.) For both solutions, IBM® Spectrum Protect™ (formerly IBM Tivoli® Storage Manager or IBM® TSM®) was used as a common workload performing backups to the target storage systems evaluated.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
Value is migrating throughout the IT industry from hardware to software and services, and High Performance Computing (HPC) is no exception. IT solution providers must position themselves to maximize their delivery of business value to their clients, particularly industrial customers who often use several applications that must be integrated in a business workflow. This requires systems and hardware vendors to invest in making their infrastructure “application ready”. With its Application Ready solutions, IBM is outflanking competitors in Technical Computing and fast-tracking the delivery of client business value by providing an expertly designed, tightly integrated, and performance-optimized architecture for several key industrial applications. These Application Ready solutions come with a complete high-performance cluster including servers, network, storage, operating system, management software, parallel file systems, and other runtime libraries, all with commercial-level solution support.
Tags : 
     IBM
By: Panasas     Published Date: Oct 02, 2014
HPC and technical computing environments require the collection, storage, and transmission of large-scale datasets. To meet these demands, data center architects must consider how increasing storage capacity over time will affect HPC workloads, performance, and system availability. While many enterprises have looked to scale-up NAS to meet their storage needs, this approach can lead to data islands that make it difficult to share data. Distributed, scale-out storage was developed to get around the technology limitations of scale-up NAS architectures.
Tags : 
     Panasas
By: Infinidat EMEA     Published Date: May 14, 2019
Big Data and analytics workloads represent a new frontier for organizations. Data is being collected from sources that did not exist 10 years ago. Mobile phone data, machine-generated data, and website interaction data are all being collected and analyzed. In addition, as IT budgets are already under pressure, Big Data footprints are getting larger and posing a huge storage challenge. This paper provides information on the issues that Big Data applications pose for storage systems and how choosing the correct storage infrastructure can streamline and consolidate Big Data and analytics applications without breaking the bank.
Tags : 
     Infinidat EMEA
By: Infinidat EMEA     Published Date: May 14, 2019
Continuous data availability is a key business continuity requirement for storage systems. It ensures protection against downtime in case of serious incidents or disasters and enables recovery to an operational state within a reasonably short period. To ensure continuous availability, storage solutions need to meet resiliency, recovery, and contingency requirements outlined by the organization.
Tags : 
     Infinidat EMEA
By: Infinidat EMEA     Published Date: May 14, 2019
Even after decades of industry and technology advancements, there still is no universal, integrated storage solution that can reduce risk, enable profitability, eliminate complexity, and seamlessly integrate into the way businesses operate and manage data at scale. To reach these goals, certain capabilities are required to achieve optimal results at the lowest cost. These capabilities include availability, reliability, performance, density, manageability, and application ecosystem integration. This paper outlines a better way to think about storing data at scale, solving these problems not only today but well into the future.
Tags : 
     Infinidat EMEA
By: Infinidat EMEA     Published Date: Oct 10, 2019
Big Data and analytics workloads present organizations with new challenges. The data being collected comes from sources that did not exist ten years ago: mobile phone data, machine-generated data, and website interaction data are all being captured and analyzed. In times of tight IT budgets, the situation is further aggravated by ever-growing Big Data volumes, which lead to enormous storage problems. This white paper provides information on the problems that Big Data applications pose for storage systems, and on how choosing the right storage infrastructure can optimize Big Data and analytics applications without breaking the budget.
Tags : 
     Infinidat EMEA
By: Infinidat EMEA     Published Date: Oct 10, 2019
Even after decades of progress in industry and technology, there is still no universal, integrated storage solution that can reduce risk, secure profitability, eliminate complexity, and fit seamlessly into the way businesses operate and manage data at scale. To reach these goals, certain capabilities are required to achieve optimal results at the lowest possible cost. These capabilities include availability, reliability, performance, data density, manageability, and integration into application ecosystems. This white paper outlines a better way to think about storing data at scale, so that these problems are solved not only today but also in the future.
Tags : 
     Infinidat EMEA
By: Bell Micro     Published Date: Jun 14, 2010
This paper describes the benefits, features, and best practices of using an IBM XIV storage system in an SAP environment.
Tags : bell micro, sap application, best practices, ibm xiv, storage system
     Bell Micro
By: Bell Micro     Published Date: Jun 14, 2010
In this Customer Solution Profile a Fortune five organization put the IBM XIV storage system through its paces in their Microsoft Exchange 2003 environment.
Tags : bell micro, microsoft exchange, ibm xiv storage system, resilience, performance, email system
     Bell Micro
By: Bell Micro     Published Date: Jun 14, 2010
This paper discusses how the IBM XIV Storage System's revolutionary built-in virtualization architecture provides a way to drastically reduce the costs of managing storage systems.
Tags : bell micro, ibm xiv, storage system management, virtualization architecture
     Bell Micro
By: Bell Micro     Published Date: Jun 14, 2010
In this white paper, we describe the XIV snapshot architecture and explain its underlying advantages in terms of performance, ease of use, flexibility and reliability.
Tags : bell micro, hardware, storage management, vmware, ibm xiv
     Bell Micro
By: NetApp     Published Date: Dec 13, 2013
Interested in running a Hadoop proof of concept on enterprise-class storage? Download this solutions guide for a technical overview of building Hadoop on NetApp E-Series storage. NetApp Open Solution for Hadoop delivers big analytics with preengineered, compatible, and supported solutions based on high-quality storage platforms, reducing the cost, schedule, and risk of do-it-yourself systems and relieving the skills gap most organizations have with Hadoop. See how ongoing operational and maintenance costs can be reduced with a highly available and scalable Hadoop solution.
Tags : open solutions, hadoop solutions guide
     NetApp
By: Gigaom     Published Date: Oct 24, 2019
A huge array of BI, analytics, data prep and machine learning platforms exist in the market, and each of those may have a variety of connectors to different databases, file systems and applications, both on-premises and in the cloud. But in today’s world of myriad data sources, simple connectivity is just table stakes. What’s essential is a data access strategy that accounts for the variety of data sources out there, including relational and NoSQL databases, file formats across storage systems — even enterprise SaaS applications — and can make them all consumable by tools and applications built for tabular data. In today’s data-driven business environment, fitting omni-structured data and disparate applications into a consistent data API makes comprehensive integration, and insights, achievable. Want to learn more and map out your data access strategy? Join us for this free 1-hour webinar from GigaOm Research. The webinar features GigaOm analyst Andrew Brust and special guests, Eric
Tags : 
     Gigaom
By: Hewlett Packard Enterprise     Published Date: Jul 25, 2019
Flash has permeated enterprise storage in small, medium-sized, and large enterprises as well as among webscale customers like cloud and service providers. It is available in a variety of different system architectures, including internal storage, hyperconverged and converged platforms, and shared storage arrays. Revenue driven by shared storage arrays, of both the hybrid flash array (HFA) and the all-flash array (AFA) type, is much larger than from other segments today and will continue to dominate enterprise storage spend through at least 2020.
Tags : 
     Hewlett Packard Enterprise
By: Hewlett Packard Enterprise     Published Date: May 10, 2019
Nimble Secondary Flash array represents a new type of data storage, designed to maximize both capacity and performance. By adding high-performance flash storage to a capacity-optimized architecture, it provides a unique backup platform that lets you put your backup data to work. Nimble Secondary Flash array uses flash performance to provide both near-instant backup and recovery from any primary storage system. It is a single device for backup, disaster recovery, and even local archiving. By using flash, you can accomplish real work such as dev/test, QA, and analytics. Deep integration with Veeam’s leading backup software simplifies data lifecycle management and provides a path to cloud archiving.
Tags : 
     Hewlett Packard Enterprise
By: HPE     Published Date: Jan 04, 2016
EMC to 3PAR Online Import Utility leverages storage federation and Peer Motion to migrate data from EMC Clariion CX4 and VNX systems to HP 3PAR StoreServ. In this ChalkTalk, HPStorageGuy Calvin Zito gives an overview.
Tags : 
     HPE
By: HPE Intel     Published Date: Jan 11, 2016
Want to know where flash storage technology is heading? Watch Part V of our "Mainstreaming of Flash" video series to hear what's next with this exciting technology! HPE 3PAR StoreServ was built to meet the extreme requirements of massively consolidated cloud service providers. Its remarkable speed—3M+ IOPS—and proven system architecture has been extended to transform mainstream midrange and enterprise deployments, with solutions from a few TBs up to 15PB scale.
Tags : 
     HPE Intel
By: Hewlett Packard Enterprise     Published Date: Aug 15, 2016
The increasing demands of application and database workloads, growing numbers of virtual machines, and more powerful processors are driving demand for ever-faster storage systems. Increasingly, IT organizations are turning to solid-state storage to meet these demands, with hybrid and all-flash arrays taking the place of traditional disk storage for high performance workloads. Download this white paper to learn how you can get the most from your storage environment.
Tags : 
     Hewlett Packard Enterprise
By: Hewlett Packard Enterprise     Published Date: Mar 26, 2018
Today’s data centers are expected to deploy, manage, and report on different tiers of business applications, databases, virtual workloads, home directories, and file sharing simultaneously. They also need to co-locate multiple systems while sharing power and energy. This is true for large as well as small environments. The trend in modern IT is to consolidate as much as possible to minimize cost and maximize the efficiency of data centers and branch offices. HPE 3PAR StoreServ is highly efficient, flash-optimized storage engineered for the true convergence of block, file, and object access to help consolidate diverse workloads efficiently. HPE 3PAR OS and converged controllers incorporate multiprotocol support into the heart of the system architecture.
Tags : 
     Hewlett Packard Enterprise
Get your white papers featured in the insideHPC White Paper Library contact: Kevin@insideHPC.com