
Results 1 - 25 of 248
By: Seagate     Published Date: Jan 27, 2015
This paper is the first to explore a recent breakthrough: the High Performance Computing (HPC) industry’s first Intelligence Community Directive (ICD) 503 (DCID 6/3 PL4) certified, secure scale-out parallel file system solution, the Seagate ClusterStor™ Secure Data Appliance. The appliance is designed to address the government and business enterprise need for collaborative, secure information sharing within a Multi-Level Security (MLS) framework at Big Data and HPC scale.
Tags : 
     Seagate
By: Seagate     Published Date: Jan 26, 2016
Finding oil and gas has always been a tricky proposition, given that reserves are hidden underground, and often under the ocean as well. The costs involved in acquiring rights to a site, drilling the wells, and operating them are considerable, and have driven the industry to adopt advanced technologies for locating the most promising sites. As a consequence, oil and gas exploration today is essentially an exercise in scientific visualization and modeling, employing some of the most advanced computational technologies available. High performance computing (HPC) systems are being used to fill these needs, primarily x86-based cluster computers and Lustre storage systems. The technology is well developed, but the scale of the problem demands medium to large-sized systems, requiring a significant capital outlay and operating expense. The most powerful systems deployed by oil and gas companies are petaflop-scale computers with multiple petabytes of attached storage.
Tags : 
     Seagate
By: Intel     Published Date: Aug 06, 2014
Designing a large-scale, high-performance data storage system presents significant challenges. This paper describes a step-by-step approach to designing such a system, presenting an iterative methodology that applies at both the component level and the system level. A detailed case study then applies the methodology to the design of a Lustre storage system. (A rough illustration of this kind of sizing loop follows this entry.)
Tags : intel, high performance storage
     Intel
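The abstract does not reproduce the methodology itself. As a purely illustrative sketch of the iterative, component-to-system sizing loop such a paper describes, consider the following; every device count and throughput figure here is invented, not taken from the Intel paper:

```python
# Hypothetical sketch of an iterative storage-sizing loop: start from
# per-component figures, roll them up to system level, and grow the
# design until it meets the targets. All numbers are illustrative.

def size_storage_system(target_bw_gbs, target_cap_tb,
                        ost_bw_gbs=1.2, ost_cap_tb=48,
                        efficiency=0.7):
    """Return the number of OSTs needed to satisfy both the bandwidth
    and the capacity targets, assuming a fixed derating factor."""
    osts = 1
    while True:
        usable_bw = osts * ost_bw_gbs * efficiency   # system-level bandwidth
        usable_cap = osts * ost_cap_tb               # system-level capacity
        if usable_bw >= target_bw_gbs and usable_cap >= target_cap_tb:
            return osts, usable_bw, usable_cap
        osts += 1

osts, bw, cap = size_storage_system(target_bw_gbs=100, target_cap_tb=5000)
print(f"{osts} OSTs -> {bw:.0f} GB/s usable, {cap} TB raw")
```

In a real design exercise the loop would also iterate over controller, network, and metadata components, re-checking the system-level roll-up after each component choice.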
By: Intel     Published Date: Aug 06, 2014
Powering Big Data Workloads with Intel® Enterprise Edition for Lustre* software
The Intel® portfolio for high-performance computing provides the following technology solutions:
• Compute - The Intel® Xeon processor E7 family provides a leap forward for every discipline that depends on HPC, with industry-leading performance and improved performance per watt. Add Intel® Xeon Phi coprocessors to your clusters and workstations to increase performance for highly parallel applications and code segments. Each coprocessor can add over a teraflop of performance and is compatible with software written for the Intel® Xeon processor E7 family. You don’t need to rewrite code or master new development tools.
• Storage - High performance, highly scalable storage solutions with Intel® Enterprise Edition for Lustre* software and Intel® Xeon processor E7 based storage systems for centralized storage. Reliable and responsive local storage with Intel® Solid State Drives.
• Networking - Intel® True Scale Fabric and Networking technologies, built for HPC to deliver fast message rates and low latency.
• Software and Tools - A broad range of software and tools to optimize and parallelize your software and clusters.
Tags : 
     Intel
By: IBM     Published Date: Jun 05, 2014
High-performance computing (HPC) continues to transform the capabilities of organizations across a range of industries. Whether for the Human Genome Project or aerodynamics testing for race cars, IBM Platform Computing solutions offer effective ways to unleash the power of HPC.
Tags : ibm, hpc
     IBM
By: IBM     Published Date: Jun 05, 2014
Whether in high-performance computing, Big Data or analytics, information technology has become an essential tool in today’s hyper-competitive business landscape. Organizations are increasingly being challenged to do more with less and this is fundamentally impacting the way that IT infrastructure is deployed and managed. In this short e-book, learn the top ten ways that IBM Platform Computing customers are using technologies like IBM Platform LSF and IBM Platform Symphony to help obtain results faster, share resources more efficiently, and improve the overall cost-effectiveness of their global IT infrastructure.
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
IBM Platform Symphony is a high performance SOA grid server that optimizes application performance and resource sharing. Platform Symphony runs distributed application services on a scalable, shared, heterogeneous grid and accelerates a wide variety of parallel applications, quickly computing results while making optimal use of available infrastructure. Platform Symphony Developer Edition enables developers to rapidly develop and test applications without the need for a production grid. After applications are running in the Developer Edition, they are guaranteed to run at scale once published to a scaled-out Platform Symphony grid. Platform Symphony Developer Edition also enables developers to easily test and verify Hadoop MapReduce applications against IBM Platform Symphony. By leveraging IBM Platform Symphony's proven, low-latency grid computing solution, more MapReduce jobs can run faster, frequently with less infrastructure. (A generic sketch of the scatter/gather pattern such a grid accelerates follows this entry.)
Tags : ibm
     IBM
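The abstract does not show Symphony's client APIs. The following is a generic scatter/gather sketch using Python's standard multiprocessing module, not the Symphony SDK, meant only to illustrate the pattern a low-latency grid scheduler distributes across many hosts:

```python
# Generic scatter/gather illustration of the pattern a grid middleware
# such as Platform Symphony schedules across many machines. This uses
# Python's standard multiprocessing module, NOT the Symphony SDK.
from multiprocessing import Pool

def service_task(work_item):
    """Stand-in for a compute service invoked on a grid node."""
    return work_item * work_item

if __name__ == "__main__":
    work_items = range(1000)            # scatter: split the job into tasks
    with Pool(processes=8) as pool:     # a real grid spans many hosts
        partials = pool.map(service_task, work_items)
    result = sum(partials)              # gather: combine partial results
    print(result)
```

On a grid, the pool of local processes is replaced by services running on remote nodes, which is where scheduling latency and resource sharing become the differentiators the abstract describes.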
By: IBM     Published Date: Sep 16, 2015
Six criteria for evaluating high-performance cloud services providers
Engineering, scientific, analytics, big data and research workloads place extraordinary demands on technical and high-performance computing (HPC) infrastructure. Supporting these workloads can be especially challenging for organizations that have unpredictable spikes in resource demand, or need access to additional compute or storage resources for a project or to support a growing business. Software Defined Infrastructure (SDI) enables organizations to deliver HPC services in the most efficient way possible, optimizing resource utilization to accelerate time to results and reduce costs. SDI is the foundation for a fully integrated environment, optimizing compute, storage and networking infrastructure to quickly adapt to changing business requirements, and dynamically managing workloads and data.
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
Are you trying to support more variable workloads than your environment can handle? Could you benefit from a high performance cluster, but lack the budget or resources to deploy and manage technical computing infrastructure? Are you running out of data center space but still need to grow your compute capacity? If you want answers to any of these questions, then please join us for an informative webinar describing the advantages and pitfalls of relocating a high performance workload to the cloud. View this webinar to learn:
- Why general purpose clouds are insufficient for technical computing analytics and Hadoop workloads
- How high performance clouds can improve your profitability and give you a competitive edge
- How to ensure that your cloud environment is secure
- How to evaluate which applications are suitable for a hybrid or public cloud environment
- How to get started and choose a service provider
Tags : 
     IBM
By: SGI     Published Date: Jun 08, 2016
With High Performance Computing (HPC) supercomputer systems that comprise tens, hundreds, or even thousands of computing cores, users are able to increase application performance and accelerate their workflows to realize dramatic productivity improvements. That performance potential often comes at the cost of complexity. By their very nature, supercomputers comprise a great number of components, both hardware and software, that must be installed, configured, tuned, and monitored to maintain maximum efficiency. In a recent report, IDC lists downtime and latency as two of the most important problems faced by data center managers. (A toy sketch of the kind of health monitoring involved follows this entry.)
Tags : 
     SGI
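To make the monitoring chore concrete, here is a toy sketch of a periodic node-health poll; the hostnames, thresholds, and ssh-based approach are invented for illustration, and production clusters use dedicated monitoring frameworks rather than a loop like this:

```python
# Toy sketch of the periodic health polling a cluster management stack
# automates. Hostnames and thresholds are hypothetical.
import subprocess

NODES = ["node001", "node002", "node003"]  # hypothetical compute nodes

def load_average(node):
    """Read the 1-minute load average from a node over ssh."""
    out = subprocess.check_output(
        ["ssh", node, "cat", "/proc/loadavg"], text=True)
    return float(out.split()[0])

for node in NODES:
    try:
        load = load_average(node)
        status = "OK" if load < 32.0 else "OVERLOADED"
        print(f"{node}: load {load:.2f} [{status}]")
    except subprocess.CalledProcessError:
        print(f"{node}: UNREACHABLE")  # candidate for downtime alerting
```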
By: RYFT     Published Date: Apr 03, 2015
The new Ryft ONE platform is a scalable 1U device that addresses a major need in the fast-growing market for advanced analytics — avoiding I/O bottlenecks that can seriously impede analytics performance on today's hyperscale cluster systems. The Ryft ONE platform is designed for easy integration into existing cluster and other server environments, where it functions as a dedicated, high-performance analytics engine. IDC believes that the new Ryft ONE platform is well positioned to exploit the rapid growth we predict for the high-performance data analysis market.
Tags : ryft, ryft one platform, 1u device, advanced analytics, avoiding i/o bottlenecks, idc
     RYFT
By: IBM     Published Date: May 20, 2015
Whether in high-performance computing, Big Data or analytics, information technology has become an essential tool in today’s hyper-competitive business landscape. Organizations are increasingly being challenged to do more with less and this is fundamentally impacting the way that IT infrastructure is deployed and managed. In this short e-book, learn the top ten ways that IBM Platform Computing customers are using technologies like IBM Platform LSF and IBM Platform Symphony to help obtain results faster, share resources more efficiently, and improve the overall cost-effectiveness of their global IT infrastructure.
Tags : 
     IBM
By: IBM     Published Date: May 20, 2015
Are you trying to support more variable workloads than your environment can handle? Could you benefit from a high performance cluster, but lack the budget or resources to deploy and manage technical computing infrastructure? Are you running out of data center space but still need to grow your compute capacity? If you want answers to any of these questions, then please join us for an informative webinar describing the advantages and pitfalls of relocating a high performance workload to the cloud. View this webinar to learn:
- Why general purpose clouds are insufficient for technical computing analytics and Hadoop workloads
- How high performance clouds can improve your profitability and give you a competitive edge
- How to ensure that your cloud environment is secure
- How to evaluate which applications are suitable for a hybrid or public cloud environment
- How to get started and choose a service provider
Tags : 
     IBM
By: IBM     Published Date: May 20, 2015
Assembling a clustered environment can be complex due to the many software components that are required to enable technical and high performance computing (HPC) applications to run effectively. This webcast will demonstrate how IBM Platform Computing products simplify cluster deployment, use and management, bringing high performance capabilities to both experienced and new HPC administrators and users. We will also present examples of companies utilizing Platform Computing software today, to improve performance and utilization of their HPC environment while reducing costs.
Tags : 
     IBM
By: Seagate     Published Date: Sep 30, 2015
Although high-performance computing (HPC) often stands apart from a typical IT infrastructure—it uses highly specialized scale-out compute, networking and storage resources—it shares with mainstream IT the ability to push data center capacity to the breaking point. Much of this is due to data center inefficiencies caused by HPC storage growth. The Seagate® ClusterStor™ approach to scale-out HPC storage can significantly improve data center efficiency. No other vendor solution offers the same advantages.
Tags : 
     Seagate
By: IBM     Published Date: Feb 13, 2015
Value is migrating throughout the IT industry from hardware to software and services. High Performance Computing (HPC) is no exception. IT solution providers must position themselves to maximize their delivery of business value to their clients, particularly industrial customers who often use several applications that must be integrated in a business workflow. This requires systems and hardware vendors to invest in making their infrastructure "application ready". With its Application Ready solutions, IBM is outflanking competitors in Technical Computing and fast-tracking the delivery of client business value by providing an expertly designed, tightly integrated and performance optimized architecture for several key industrial applications. These Application Ready solutions come with a complete high-performance cluster including servers, network, storage, operating system, management software, parallel file systems and other run-time libraries, all with commercial-level solution support.
Tags : 
     IBM
By: Dell and Intel®     Published Date: Nov 18, 2015
Unleash the extreme performance and scalability of the Lustre® parallel file system for high performance computing (HPC) workloads, including technical ‘big data’ applications common within today’s enterprises. The Dell Storage for HPC with Intel® Enterprise Edition (EE) for Lustre Solution allows end users who need the benefits of large-scale, high-bandwidth storage to tap the power and scalability of Lustre, with simplified installation, configuration, and management features that are backed by Dell and Intel®. (A minimal striping sketch follows this entry.)
Tags : 
     Dell and Intel®
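One concrete way Lustre delivers high bandwidth is by striping a file across many object storage targets (OSTs) so large sequential I/O is spread over many servers. A minimal sketch, assuming a mounted Lustre client and the standard `lfs` utility; the path, stripe count, and stripe size are illustrative choices, not recommendations from the Dell/Intel solution:

```python
# Minimal sketch: stripe a file across Lustre OSTs. Assumes a mounted
# Lustre client and the standard `lfs` utility; the path and layout
# parameters below are hypothetical.
import subprocess

path = "/mnt/lustre/results/output.dat"   # hypothetical Lustre path

# Create the file striped across 8 OSTs, 1 MiB per stripe.
subprocess.run(["lfs", "setstripe", "-c", "8", "-S", "1M", path], check=True)

# Confirm the layout that Lustre assigned.
subprocess.run(["lfs", "getstripe", path], check=True)
```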
By: Data Direct Networks     Published Date: Dec 31, 2015
When it comes to generating increasingly larger data sets and stretching the limits of high performance computing (HPC), the field of genomics and next generation sequencing (NGS) is at the forefront. The major impetus for this data explosion began in 1990 when the U.S. kicked off the Human Genome Project, an ambitious project designed to sequence the three billion base pairs that constitute the complete set of DNA in the human body. Eleven years and $3 billion later the deed was done. This breakthrough was followed by a massive upsurge in genomics research and development that included rapid advances in sequencing using the power of HPC. Today an individual’s genome can be sequenced overnight for less than $1,000. (A back-of-the-envelope check on these figures follows this entry.)
Tags : 
     Data Direct Networks
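The cost drop the abstract cites works out to roughly a three-million-fold improvement. A quick back-of-the-envelope check, using only the figures quoted above:

```python
# Back-of-the-envelope check on the sequencing cost figures cited above.
base_pairs = 3_000_000_000          # human genome, ~3 billion base pairs
hgp_cost = 3_000_000_000            # Human Genome Project, ~$3B over 11 years
today_cost = 1_000                  # modern overnight sequencing, <$1,000

print(f"HGP: ${hgp_cost / base_pairs:.2f} per base pair")      # ~$1.00
print(f"Today: ${today_cost / base_pairs:.2e} per base pair")  # ~3.3e-07
print(f"Cost reduction: {hgp_cost / today_cost:,.0f}x")        # 3,000,000x
```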
By: VMware     Published Date: Aug 17, 2014
Over the past several years, virtualization has made major inroads into enterprise IT infrastructures. Now it is moving into the realm of high performance computing (HPC), especially for such compute-intensive applications as electronic design automation (EDA), life sciences, financial services and digital media entertainment.
Tags : vmware, virtualization
     VMware
By: Forcepoint     Published Date: May 14, 2019
Downtime Is Not an Option: High Availability Next Generation Firewall
Access to applications, data, and resources on the Internet is mission-critical for every organization. Downtime is unacceptable. Security for that network must also be highly available and not cause performance degradation of the network. The increased workload of security devices as they analyze traffic and defend users from malicious attacks strains computing resources. The next generation of security solutions must build in high availability that can scale as the business changes. Download this whitepaper to find out how high availability is at the core of the Forcepoint NGFW (Next Generation Firewall).
Tags : 
     Forcepoint
By: ASG Software Solutions     Published Date: Jun 02, 2009
Organizations must meet end-user expectations and deliver high levels of performance against Service Level Agreements (SLAs), or risk losing business. This paper details the key capabilities needed for successful end-user monitoring and provides critical considerations for delivering a successful end-user experience. (A toy sketch of checking response times against an SLA follows this entry.)
Tags : asg, cmdb, bsm, itil, bsp, metacmdb, user experience, lan, wan, configuration management, metadata, lob, sdm, service dependency mapping, ecommerce, bpm, workflow, itsm
     ASG Software Solutions
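To make "performance against SLAs" concrete, here is a toy sketch that evaluates measured end-user response times against an SLA threshold; the sample data and the 95th-percentile/2-second target are hypothetical, not taken from the ASG paper:

```python
# Toy sketch: evaluate measured end-user response times against an SLA.
# Sample data and thresholds are hypothetical.
def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times."""
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[rank]

response_times_s = [0.4, 0.6, 0.5, 2.3, 0.7, 0.8, 1.9, 0.5, 0.6, 3.1]
p95 = percentile(response_times_s, 95)
sla_threshold_s = 2.0

print(f"p95 response time: {p95:.2f}s "
      f"({'within' if p95 <= sla_threshold_s else 'violates'} SLA)")
```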
By: HPE & Intel®     Published Date: Oct 10, 2016
In the financial services industry (FSI), high-performance compute infrastructure is not optional; it’s a prerequisite for survival. No other industry generates more data, and few face the combination of challenges that financial services does: a rapidly changing competitive landscape, a complex regulatory environment, tightening margin pressure, exponential data growth, and demanding performance service-level agreements (SLAs).
Tags : 
     HPE & Intel®
By: Hewlett Packard Enterprise     Published Date: Oct 23, 2017
Midsized firms operate in the same hypercompetitive, digital environment as large enterprises, but with fewer technical and budget resources to draw from. That is why it is essential for IT leaders to leverage best-practice processes and models that can help them support strategic business goals such as agility, innovation, speed-to-market, and always-on business operations. A hybrid IT implementation can provide the infrastructure flexibility to support the next generation of high-performance, data-intensive applications. A hybrid foundation can also facilitate new, collaborative processes that bring together IT and business stakeholders.
Tags : digital environment, hyper competitive, business goals, hybrid it, technology, hpe
     Hewlett Packard Enterprise
By: Hewlett Packard Enterprise     Published Date: May 10, 2019
Nimble Secondary Flash array represents a new type of data storage, designed to maximize both capacity and performance. By adding high-performance flash storage to a capacity-optimized architecture, it provides a unique backup platform that lets you put your backup data to work. Nimble Secondary Flash array uses flash performance to provide both near-instant backup and recovery from any primary storage system. It is a single device for backup, disaster recovery, and even local archiving. By using flash, you can accomplish real work such as dev/test, QA, and analytics. Deep integration with Veeam’s leading backup software simplifies data lifecycle management and provides a path to cloud archiving.
Tags : 
     Hewlett Packard Enterprise
Get your white papers featured in the insideHPC White Paper Library. Contact: Kevin@insideHPC.com