
By: Cray     Published Date: Jul 28, 2014
The Cray® CS300-LC™ liquid-cooled cluster supercomputer combines system performance and power savings, allowing users to reduce capital expense and operating costs.
Tags : cray, warm water cooled cooling, power savings, computing clusters, gpu computing, gpu cluster, x86 cluster, phi clusters, xeon phi clusters, hpc clusters
     Cray
By: Cray     Published Date: Aug 22, 2014
Learn how to optimize codes for faster application performance with Intel® Xeon® Phi™ coprocessor.
Tags : application performance, intel® xeon® phi™ coprocessor
     Cray
By: Cray     Published Date: Jul 02, 2015
The Cray XC series is a distributed memory system developed as part of Cray’s participation in the Defense Advanced Research Projects Agency’s (DARPA) High Productivity Computing System (HPCS) program. Previously codenamed “Cascade,” the Cray XC system is capable of sustained multi-petaflops performance and features a hybrid architecture combining multiple processor technologies, a high performance network and a high performance operating system and programming environment.
Tags : 
     Cray
By: Seagate     Published Date: Jan 27, 2015
This paper is the first to explore a recent breakthrough: the High Performance Computing (HPC) industry’s first Intelligence Community Directive (ICD) 503 (DCID 6/3 PL4) certified, compliant and secure scale-out parallel file system solution, the Seagate ClusterStor™ Secure Data Appliance. It is designed to address government and business enterprise needs for collaborative and secure information sharing within a Multi-Level Security (MLS) framework at Big Data and HPC scale.
Tags : 
     Seagate
By: Seagate     Published Date: Jan 26, 2016
Finding oil and gas has always been a tricky proposition, given that reserves are hidden underground and, as often as not, under the ocean as well. The costs involved in acquiring rights to a site, drilling the wells, and operating them are considerable, and have driven the industry to adopt advanced technologies for locating the most promising sites. As a consequence, oil and gas exploration today is essentially an exercise in scientific visualization and modeling, employing some of the most advanced computational technologies available. High performance computing (HPC) systems are being used to fill these needs, primarily x86-based cluster computers and Lustre storage systems. The technology is well developed, but the scale of the problem demands medium to large systems, requiring a significant capital outlay and operating expense. The most powerful systems deployed by oil and gas companies are petaflop-scale computers with multiple petabytes of attached storage.
Tags : 
     Seagate
By: Green Revolution Cooling     Published Date: Feb 20, 2014
This paper examines the advantages of liquid submersion cooling and, in particular, takes a closer look at GreenDEF™, the dielectric mineral oil blend used by Green Revolution Cooling, a global leader in submersion cooling technologies. Further, the paper will address concerns of potential adopters of submersion systems and explain why these systems can actually improve performance in servers and protect expensive data center investments.
Tags : 
     Green Revolution Cooling
By: Green Revolution Cooling     Published Date: May 12, 2014
Download Green Revolution Cooling’s white paper, “Data Center Floor Space Utilization – Comparing Density in Liquid Submersion and Air Cooling Systems,” to learn about the density of liquid submersion cooling, why looks can be deceiving, and how, more often than not, liquid cooling once again has air beat.
Tags : green revolution, data center floor space
     Green Revolution Cooling
By: Intel     Published Date: Aug 06, 2014
Designing a large-scale, high-performance data storage system presents significant challenges. This paper describes a step-by-step approach to designing such a system and presents an iterative methodology that applies at both the component level and the system level. A detailed case study using the methodology described to design a Lustre storage system is presented.
Tags : intel, high performance storage
     Intel
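The iterative, component-to-system design methodology that the Intel paper describes can be sketched in a few lines. This is an illustrative example only, not the paper's actual procedure: the per-OSS bandwidth, efficiency derating, and throughput target below are hypothetical figures chosen for the sketch.

```python
# Illustrative sketch of iterative storage-system sizing: grow the
# component count until the derated aggregate meets the system target.
# All numbers are hypothetical assumptions, not figures from the paper.

def size_storage_system(target_gbps, per_oss_gbps, efficiency=0.8):
    """Iterate the object storage server (OSS) count until the
    derated aggregate bandwidth meets the system-level target."""
    oss_count = 1
    while oss_count * per_oss_gbps * efficiency < target_gbps:
        oss_count += 1
    return oss_count

# Example: 100 GB/s system target, 6 GB/s per OSS, 80% efficiency
print(size_storage_system(100, 6))  # -> 21 OSS nodes
```

In practice the same loop would be repeated at each level of the design (disk to OST, OST to OSS, OSS to filesystem), which is the iterative character the paper highlights.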
By: Intel     Published Date: Aug 06, 2014
Powering Big Data Workloads with Intel® Enterprise Edition for Lustre* software. The Intel® portfolio for high-performance computing provides the following technology solutions:
• Compute – The Intel® Xeon® processor E7 family provides a leap forward for every discipline that depends on HPC, with industry-leading performance and improved performance per watt. Add Intel® Xeon Phi™ coprocessors to your clusters and workstations to increase performance for highly parallel applications and code segments. Each coprocessor can add over a teraflop of performance and is compatible with software written for the Intel® Xeon processor E7 family. You don’t need to rewrite code or master new development tools.
• Storage – High-performance, highly scalable storage solutions with Intel® Enterprise Edition for Lustre* software and Intel® Xeon processor E7 based storage systems for centralized storage. Reliable and responsive local storage with Intel® Solid State Drives.
• Networking – Intel® True Scale Fabric and Networking technologies, built for HPC to deliver fast message rates and low latency.
• Software and Tools – A broad range of software and tools to optimize and parallelize your software and clusters.
Tags : 
     Intel
By: InsideHPC Special Report     Published Date: Aug 17, 2016
A single issue has always defined the history of HPC systems: performance. While offloading and co-design may seem like new approaches to computing, they actually have been used, to a lesser degree, in the past as a way to enhance performance. Current co-design methods are now going deeper into cluster components than was previously possible. These new capabilities extend from the local cluster nodes into the “computing network.”
Tags : 
     InsideHPC Special Report
By: Altair     Published Date: Feb 19, 2014
PBS Works™, Altair's suite of on-demand cloud computing technologies, allows enterprises to maximize ROI on existing infrastructure assets. PBS Works is the most widely implemented software environment for managing grid, cloud, and cluster computing resources worldwide. The suite’s flagship product, PBS Professional®, allows enterprises to easily share distributed computing resources across geographic boundaries. With additional tools for portal-based submission, analytics, and data management, the PBS Works suite is a comprehensive solution for optimizing HPC environments. Leveraging a revolutionary “pay-for-use” unit-based business model, PBS Works delivers increased value and flexibility over conventional software-licensing models.
Tags : 
     Altair
By: Altair     Published Date: Feb 19, 2014
The Weizmann Institute of Science is one of the world’s leading multidisciplinary research institutions. Hundreds of scientists, laboratory technicians and research students embark daily on fascinating journeys into the unknown, seeking to improve our understanding of nature and our place within it. Groundbreaking medical and technological applications that have emerged from basic research conducted by Weizmann Institute scientists include: amniocentesis, a prenatal diagnostic test; sophisticated laser systems for high-precision diamond cutting; living polymerization, one of the most fundamental techniques of the modern polymer industry; and ribosome structure analysis, for which the Institute’s Professor Ada Yonath was awarded a Nobel Prize in Chemistry.
Tags : 
     Altair
By: Altair     Published Date: Jul 15, 2014
NSCEE’s new workload management solution, including PBS Professional, has reduced overall runtimes for processing workloads. Furthermore, NSCEE credits PBS Professional with improving system manageability and extensibility thanks to these key features:
• Lightweight solution
• Very easy to manage
• Not dependent on any specific operating system
• Can be easily extended by adding site-specific processing plugins/hooks
To learn more, read the full paper.
Tags : 
     Altair
By: IBM     Published Date: Jun 05, 2014
High-performance computing (HPC) continues to transform the capabilities of organizations across a range of industries. From the Human Genome Project to aerodynamics testing for race cars, we demonstrate how IBM Platform Computing solutions offer effective ways to unleash the power of HPC.
Tags : ibm, hpc
     IBM
By: IBM     Published Date: Jun 05, 2014
As new research and engineering environments are expanded to include more powerful computers to run increasingly complex computer simulations, the management of these heterogeneous computing environments continues to increase in complexity as well. Integrated solutions that include the Intel® Many Integrated Cores (MIC) architecture can dramatically boost aggregate performance for highly-parallel applications.
Tags : 
     IBM
By: IBM     Published Date: Jun 05, 2014
IBM NeXtscale System is changing system design dynamics in the HPC marketplace with ultra-dense servers. Enterprises looking for raw computing power and throughput to handle technical computing, cloud, grid and analytics should evaluate this new ultra-dense, highly scalable systems design. Read the paper by Clabby Analytics to learn more.
Tags : 
     IBM
By: IBM     Published Date: Jun 05, 2014
Whether in high-performance computing, Big Data or analytics, information technology has become an essential tool in today’s hyper-competitive business landscape. Organizations are increasingly being challenged to do more with less and this is fundamentally impacting the way that IT infrastructure is deployed and managed. In this short e-book, learn the top ten ways that IBM Platform Computing customers are using technologies like IBM Platform LSF and IBM Platform Symphony to help obtain results faster, share resources more efficiently, and improve the overall cost-effectiveness of their global IT infrastructure.
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
This demonstration shows how an organization using IBM Platform Computing workload managers can easily and securely tap resources in the IBM SoftLayer public cloud to handle periods of peak demand and reduce total IT infrastructure costs.
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
While many storage vendors have submitted OpenStack drivers for their equipment, IBM has gone much further than that to support the OpenStack community. Read this paper and learn how.
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
Are infrastructure limitations holding you back?
• Users struggling with access to sufficient compute resources
• Resources tapped out during peak demand times
• Lack of budget, space or power for the environment
IBM recently announced a new Platform Computing cloud service delivering hybrid cloud optimized for analytics and technical computing applications. The offering provides:
• Ready-to-run IBM Platform LSF and Symphony clusters in the cloud
• Seamless workload management, on premise and in the cloud
• 24x7 cloud operation technical support
• Dedicated, isolated physical machines for complete security
Join us for this brief 20-minute webcast to learn how IBM offers a complete end-to-end hybrid cloud solution that may be key to improving your organization’s effectiveness and expediting time to market for your products.
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
In an audited benchmark conducted by STAC®, the Securities Technology Analysis Center, InfoSphere BigInsights for Hadoop was found to deliver an approximate 4x performance gain on average over open source Hadoop running jobs derived from production workload traces. The result is consistent with an approximate eleven times advantage in raw scheduling performance provided by Adaptive MapReduce – a new InfoSphere BigInsights for Hadoop feature that leverages high-performance computing technology from IBM Platform Computing.
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
Learn how much IBM Platform Symphony can save you. Answer a few questions in this easy to use wizard about your infrastructure, application environment, personnel and your growth rate. The logic behind this tool will generate a powerful report you can use to realize significant cost savings.
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
IBM Platform LSF is a powerful workload management platform for demanding, distributed HPC environments. It provides a comprehensive set of intelligent, policy-driven scheduling features that enable you to utilize all of your compute infrastructure resources and ensure optimal application performance.
Tags : ibm
     IBM
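The "policy-driven scheduling" that workload managers like IBM Platform LSF provide can be illustrated with a toy dispatcher. This is a simplified sketch of the idea (rank queued jobs by a policy, dispatch those that fit available slots), not LSF's actual algorithm; the job names and slot counts are invented for the example.

```python
# Toy policy-driven dispatcher: highest priority first, then submit
# order, subject to a free-slot limit. A simplified illustration of
# what a workload manager's scheduling loop does, not LSF's algorithm.
import heapq

def schedule(jobs, free_slots):
    """jobs: list of (priority, submit_order, slots_needed, name).
    Return names of jobs dispatched under the policy."""
    heap = [(-prio, order, slots, name) for prio, order, slots, name in jobs]
    heapq.heapify(heap)
    dispatched = []
    while heap:
        _neg_prio, _order, slots, name = heapq.heappop(heap)
        if slots <= free_slots:  # dispatch only if the job fits
            dispatched.append(name)
            free_slots -= slots
    return dispatched

jobs = [(10, 0, 4, "simulation"), (5, 1, 2, "postproc"), (10, 2, 8, "render")]
print(schedule(jobs, 8))  # -> ['simulation', 'postproc']
```

A production scheduler layers many such policies (fair share, preemption, backfill) over far richer resource models, which is exactly the complexity a product like LSF packages up.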
By: IBM     Published Date: Jun 05, 2014
IBM Platform Symphony is a high performance SOA grid server that optimizes application performance and resource sharing. Platform Symphony runs distributed application services on a scalable, shared, heterogeneous grid and accelerates a wide variety of parallel applications, quickly computing results while making optimal use of available infrastructure. Platform Symphony Developer Edition enables developers to rapidly develop and test applications without the need for a production grid. After applications are running in the Developer Edition, they are guaranteed to run at scale once published to a scaled-out Platform Symphony grid. Platform Symphony Developer Edition also enables developers to easily test and verify Hadoop MapReduce applications against IBM Platform Symphony. By leveraging IBM Platform Symphony’s proven, low-latency grid computing solution, more MapReduce jobs can run faster, frequently with less infrastructure.
Tags : ibm
     IBM
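For readers unfamiliar with the MapReduce programming model that Platform Symphony accelerates, here is a minimal in-process sketch. It stands in for work a grid would distribute across many nodes and makes no claims about Symphony's own APIs.

```python
# Toy in-process MapReduce word count, illustrating the map/shuffle/
# reduce model only; a grid such as Platform Symphony would distribute
# these phases across nodes rather than run them in one process.
from collections import defaultdict

def map_phase(records):
    """Map: emit (word, 1) for every word in every input record."""
    for line in records:
        for word in line.split():
            yield word, 1

def reduce_phase(pairs):
    """Reduce: sum the counts emitted for each distinct key."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

counts = reduce_phase(map_phase(["to be or not to be"]))
print(counts["to"])  # -> 2
```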
Get your white papers featured in the insideHPC White Paper Library; contact: Kevin@insideHPC.com