
Results 1 - 25 of 12,091
By: Cray     Published Date: Aug 22, 2014
Learn how to optimize code for faster application performance with the Intel® Xeon Phi™ coprocessor.
Tags : application performance, intel® xeon® phi™ coprocessor
     Cray
By: Seagate     Published Date: Jan 27, 2015
This paper is the first to explore a recent breakthrough: the High Performance Computing (HPC) industry's first Intelligence Community Directive (ICD) 503 (DCID 6/3 PL4) certified, compliant and secure scale-out parallel file system solution, the Seagate ClusterStor™ Secure Data Appliance. It is designed to address the government and enterprise need for collaborative and secure information sharing within a Multi-Level Security (MLS) framework at Big Data and HPC scale.
Tags : 
     Seagate
By: Intel     Published Date: Aug 06, 2014
Designing a large-scale, high-performance data storage system presents significant challenges. This paper describes a step-by-step approach to designing such a system and presents an iterative methodology that applies at both the component level and the system level. It concludes with a detailed case study that applies this methodology to the design of a Lustre storage system.
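To make the iterative, two-level sizing idea concrete, here is a minimal back-of-envelope sketch; all component figures are hypothetical placeholders, not numbers from the paper.

```python
# Hypothetical sizing sketch: iterate component counts until system-level
# targets are met. All figures are illustrative, not from the paper.

TARGET_BANDWIDTH_GBS = 100.0   # desired aggregate throughput (GB/s)
TARGET_CAPACITY_PB   = 5.0     # desired usable capacity (PB)

OSS_BANDWIDTH_GBS = 5.0        # assumed per-object-storage-server throughput
DISK_CAPACITY_TB  = 4.0        # assumed per-disk usable capacity
DISKS_PER_OSS     = 60         # assumed enclosure size

def size_system(bw_target, cap_target):
    # Component level: how many servers does each target demand?
    oss_for_bw  = -(-bw_target // OSS_BANDWIDTH_GBS)                      # ceil
    oss_for_cap = -(-(cap_target * 1000) // (DISK_CAPACITY_TB * DISKS_PER_OSS))
    # System level: the binding constraint wins; report both resulting figures.
    oss = int(max(oss_for_bw, oss_for_cap))
    return {
        "oss_nodes": oss,
        "aggregate_bw_gbs": oss * OSS_BANDWIDTH_GBS,
        "usable_capacity_pb": oss * DISKS_PER_OSS * DISK_CAPACITY_TB / 1000,
    }

print(size_system(TARGET_BANDWIDTH_GBS, TARGET_CAPACITY_PB))
# e.g. {'oss_nodes': 21, 'aggregate_bw_gbs': 105.0, 'usable_capacity_pb': 5.04}
```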
Tags : intel, high performance storage
     Intel
By: Intel     Published Date: Aug 06, 2014
Powering Big Data Workloads with Intel® Enterprise Edition for Lustre* software. The Intel® portfolio for high-performance computing provides the following technology solutions:

• Compute - The Intel® Xeon processor E7 family provides a leap forward for every discipline that depends on HPC, with industry-leading performance and improved performance per watt. Add Intel® Xeon Phi™ coprocessors to your clusters and workstations to increase performance for highly parallel applications and code segments. Each coprocessor can add over a teraflop of performance and is compatible with software written for the Intel® Xeon processor E7 family. You don't need to rewrite code or master new development tools.
• Storage - High performance, highly scalable storage solutions with Intel® Enterprise Edition for Lustre* software and Intel® Xeon processor E7 based storage systems for centralized storage. Reliable and responsive local storage with Intel® Solid State Drives.
• Networking - Intel® True Scale Fabric and Networking technologies, built for HPC to deliver fast message rates and low latency.
• Software and Tools - A broad range of software and tools to optimize and parallelize your software and clusters.
Tags : 
     Intel
By: InsideHPC Special Report     Published Date: Aug 17, 2016
A single issue has always defined the history of HPC systems: performance. While offloading and co-design may seem like new approaches to computing, they have in fact been used in the past, to a lesser degree, as a way to enhance performance. Current co-design methods go deeper into cluster components than was previously possible. These new capabilities extend from the local cluster nodes into the “computing network.”
Tags : 
     InsideHPC Special Report
By: Altair     Published Date: Feb 19, 2014
The Weizmann Institute of Science is one of the world’s leading multidisciplinary research institutions. Hundreds of scientists, laboratory technicians and research students embark daily on fascinating journeys into the unknown, seeking to improve our understanding of nature and our place within it. Groundbreaking medical and technological applications that have emerged from basic research conducted by Weizmann Institute scientists include: amniocentesis, a prenatal diagnostic test; sophisticated laser systems for high-precision diamond cutting; living polymerization, one of the most fundamental techniques of the modern polymer industry; and ribosome structure analysis, for which the Institute’s Professor Ada Yonath was awarded a Nobel Prize in Chemistry.
Tags : 
     Altair
By: IBM     Published Date: Jun 05, 2014
As new research and engineering environments expand to include more powerful computers that run increasingly complex simulations, managing these heterogeneous computing environments grows more complex as well. Integrated solutions that include the Intel® Many Integrated Core (MIC) architecture can dramatically boost aggregate performance for highly parallel applications.
Tags : 
     IBM
By: IBM     Published Date: Jun 05, 2014
Are infrastructure limitations holding you back?

- Users struggling with access to sufficient compute resources
- Resources tapped out during peak demand times
- Lack of budget, space or power to grow the environment

IBM recently announced a new Platform Computing cloud service delivering hybrid cloud optimized for analytics and technical computing applications. The offering provides:

- Ready-to-run IBM Platform LSF & Symphony clusters in the cloud
- Seamless workload management, on premises and in the cloud
- 24x7 cloud operation technical support
- Dedicated, isolated physical machines for complete security

Join us for this brief 20-minute webcast to learn how IBM offers a complete end-to-end hybrid cloud solution that may be key to improving your organization’s effectiveness and expediting time to market for your products.
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
In an audited benchmark conducted by STAC®, the Securities Technology Analysis Center, InfoSphere BigInsights for Hadoop was found to deliver an approximate 4x average performance gain over open source Hadoop running jobs derived from production workload traces. The result is consistent with an approximate 11x advantage in raw scheduling performance provided by Adaptive MapReduce, a new InfoSphere BigInsights for Hadoop feature that leverages high-performance computing technology from IBM Platform Computing.
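As a rough Amdahl's-law sanity check of how those two figures can fit together (our own back-of-envelope, not part of the audited STAC result):

```python
# Back-of-envelope Amdahl's-law check (illustrative, not from the STAC audit).
# If scheduling is sped up k times and everything else is unchanged, the
# overall speedup S satisfies  S = 1 / ((1 - f) + f / k),  where f is the
# fraction of original job time spent in scheduling.

k = 11.0   # reported raw scheduling advantage
S = 4.0    # reported average overall gain

# Solve for the scheduling fraction f implied by the two figures:
f = (1 - 1 / S) / (1 - 1 / k)
print(f"implied scheduling fraction: {f:.1%}")   # ~82.5%
```

On these assumptions, an 11x scheduling speedup fully explains a 4x overall gain only when scheduling overhead dominates the traced jobs, which is plausible for workloads made up of many short tasks.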
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
Learn how much IBM Platform Symphony can save you. Answer a few questions in this easy-to-use wizard about your infrastructure, application environment, personnel, and growth rate; the tool then generates a detailed report you can use to identify significant cost savings.
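The flavor of arithmetic such a wizard performs can be sketched in a few lines; the model and every input below are hypothetical, not IBM's actual calculator logic.

```python
# Hypothetical cost model: better scheduling raises cluster utilization,
# so fewer servers cover the same peak demand. Inputs are illustrative.

peak_core_demand = 10_000
cores_per_server = 32
server_cost_usd  = 8_000     # assumed annualized hardware + power + admin

def servers_needed(utilization):
    return -(-peak_core_demand // int(cores_per_server * utilization))  # ceil

before = servers_needed(0.50)   # assumed utilization today
after  = servers_needed(0.80)   # assumed utilization with better scheduling
print(f"servers: {before} -> {after}, "
      f"annual savings ~${(before - after) * server_cost_usd:,}")
```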
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
IBM Platform LSF is a powerful workload management platform for demanding, distributed HPC environments. It provides a comprehensive set of intelligent, policy-driven scheduling features that enable you to utilize all of your compute infrastructure resources and ensure optimal application performance.
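To picture what policy-driven scheduling means in practice, here is a minimal generic sketch of priority-plus-fair-share dispatch; it illustrates the concept only and is not LSF's actual algorithm or API.

```python
# Minimal policy-driven dispatch sketch (generic illustration, not LSF code).
# Jobs are ordered by queue priority, then by the submitting user's
# accumulated usage, and dispatched while free slots remain.

from dataclasses import dataclass

@dataclass
class Job:
    user: str
    queue_priority: int   # higher dispatches first
    slots: int            # cores requested

usage = {"alice": 120, "bob": 40}          # historical core-hours per user
pending = [Job("alice", 10, 16), Job("bob", 10, 16), Job("bob", 5, 64)]
free_slots = 48

# Policy: priority first, then favor users with less accumulated usage.
pending.sort(key=lambda j: (-j.queue_priority, usage.get(j.user, 0)))

for job in pending:
    if job.slots <= free_slots:
        free_slots -= job.slots
        print(f"dispatch {job.user} ({job.slots} slots)")
```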
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
IBM Platform Symphony is a high performance SOA grid server that optimizes application performance and resource sharing. Platform Symphony runs distributed application services on a scalable, shared, heterogeneous grid and accelerates a wide variety of parallel applications, quickly computing results while making optimal use of available infrastructure.

Platform Symphony Developer Edition enables developers to rapidly develop and test applications without the need for a production grid; once applications are running in the Developer Edition, they are guaranteed to run at scale when published to a scaled-out Platform Symphony grid. The Developer Edition also enables developers to easily test and verify Hadoop MapReduce applications against IBM Platform Symphony. By leveraging IBM Platform Symphony's proven, low-latency grid computing technology, more MapReduce jobs can run faster, frequently with less infrastructure.
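For readers new to the programming model that Symphony's runtime accelerates, the canonical word count in plain Python shows the map, shuffle, and reduce steps; this is a generic illustration, not the Symphony or Hadoop API.

```python
# Generic MapReduce illustration (plain Python, not the Symphony/Hadoop API).
from collections import defaultdict

docs = ["low latency grid", "grid computing at scale", "low latency wins"]

# Map: emit (key, 1) pairs.
mapped = [(word, 1) for doc in docs for word in doc.split()]

# Shuffle: group values by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce: aggregate each key's values.
counts = {key: sum(values) for key, values in groups.items()}
print(counts)   # {'low': 2, 'latency': 2, 'grid': 2, ...}
```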
Tags : ibm
     IBM
By: IBM     Published Date: Sep 02, 2014
Advanced analytics strategies yield the greatest benefits in terms of improving patient and business outcomes when applied across the entire healthcare ecosystem. But the challenge of collaborating across organizational boundaries in order to share information and insights is daunting to many stakeholders.
Tags : ibm, ecosystem
     IBM
By: IBM     Published Date: Sep 16, 2015
Docker is a lightweight Linux container technology built on top of LXC (LinuX Containers) and cgroups (control groups), and it offers many attractive benefits for HPC environments. Find out how IBM Platform LSF® and Docker have been integrated outside the core of Platform LSF, with a real-world example involving the application BWA (bio-bwa.sourceforge.net). This step-by-step white paper provides details on how to get started with the IBM Platform LSF and Docker integration, which is available via open beta on Service Management Connect.
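The basic shape of a containerized BWA run can be sketched with the standard docker CLI driven from Python; the image name and paths are hypothetical placeholders, and the white paper covers how Platform LSF manages such containers under scheduler control.

```python
# Hypothetical sketch of a containerized BWA run via the docker CLI.
# The image name, mount paths, and input files are placeholders;
# see the white paper for the actual LSF-managed integration.
import subprocess

cmd = [
    "docker", "run", "--rm",
    "-v", "/data/genomics:/data",        # host directory with inputs
    "example/bwa:latest",                # hypothetical BWA image
    "bwa", "mem", "/data/ref.fa", "/data/reads.fq",
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.returncode)
```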
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
Are you trying to support more variable workloads than your environment can handle? Could you benefit from a high performance cluster, but lack the budget or resources to deploy and manage technical computing infrastructure? Are you running out of data center space but still need to grow your compute capacity? If you want answers to any of these questions, then please join us for an informative webinar describing the advantages and pitfalls of relocating a high performance workload to the cloud.

View this webinar to learn:

- Why general purpose clouds are insufficient for technical computing analytics and Hadoop workloads
- How high performance clouds can improve your profitability and give you a competitive edge
- How to ensure that your cloud environment is secure
- How to evaluate which applications are suitable for a hybrid or public cloud environment
- How to get started and choose a service provider
Tags : 
     IBM
By: SGI     Published Date: Jun 08, 2016
With High Performance Computing (HPC) supercomputer systems that comprise tens, hundreds, or even thousands of computing cores, users are able to increase application performance and accelerate their workflows to realize dramatic productivity improvements. The performance potential often comes at the cost of complexity. By their very nature, supercomputers comprise a great number of components, both hardware and software, that must be installed, configured, tuned, and monitored to maintain maximum efficiency. In a recent report, IDC lists downtime and latency as two of the most important problems faced by data center managers.
Tags : 
     SGI
By: Penguin Computing     Published Date: Mar 23, 2015
The Open Compute Project, initiated by Facebook as a way to increase computing power while lowering associated costs with hyper-scale computing, has gained a significant industry following. While the initial specifications were created for a Web 2.0 environment, Penguin Computing has adapted these concepts to create a complete hardware ecosystem solution that addresses these needs and more. The Tundra OpenHPC system is applicable to a wide range of HPC challenges and delivers the most requested features for data center architects.
Tags : penguin computing, open computing, computing power, hyper-scale computing, tundra openhpc
     Penguin Computing
By: IBM     Published Date: May 20, 2015
Software-defined storage is enterprise-class storage that uses standard hardware, with all the important storage and management functions performed in intelligent software. It delivers automated, policy-driven, application-aware storage services through orchestration of the underlying storage infrastructure in support of an overall software-defined environment.
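Policy-driven, application-aware placement can be pictured as a small rules table consulted at provisioning time; the tiers and policies below are hypothetical, not IBM's implementation.

```python
# Generic sketch of policy-driven storage placement (hypothetical tiers/rules).
POLICIES = {
    # (needs_low_latency, needs_high_capacity) -> storage tier
    (True,  False): "ssd-pool",
    (False, True):  "capacity-pool",
    (True,  True):  "hybrid-pool",
    (False, False): "standard-pool",
}

def place(app_profile):
    # The orchestration layer consults the policy table instead of
    # requiring an administrator to pick hardware by hand.
    key = (app_profile["low_latency"], app_profile["high_capacity"])
    return POLICIES[key]

print(place({"low_latency": True, "high_capacity": False}))   # ssd-pool
```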
Tags : 
     IBM
By: IBM     Published Date: May 20, 2015
In this white paper, we look at various cloud models and assess their suitability for solving IT challenges. We provide recommendations on what to look for in a cloud provider. Finally, we take a look at the IBM Cloud portfolio.
Tags : 
     IBM
By: IBM     Published Date: May 20, 2015
Assembling a clustered environment can be complex due to the many software components that are required to enable technical and high performance computing (HPC) applications to run effectively. This webcast will demonstrate how IBM Platform Computing products simplify cluster deployment, use and management, bringing high performance capabilities to both experienced and new HPC administrators and users. We will also present examples of companies utilizing Platform Computing software today, to improve performance and utilization of their HPC environment while reducing costs.
Tags : 
     IBM
By: Adaptive Computing     Published Date: Feb 21, 2014
Big data applications represent a fast-growing category of high-value applications that are increasingly employed by business and technical computing users. However, they have exposed an inconvenient dichotomy in the way resources are utilized in data centers. Conventional enterprise and web-based applications can be executed efficiently in virtualized server environments, where resource management and scheduling are generally confined to a single server. By contrast, data-intensive analytics and technical simulations demand large aggregated resources, necessitating intelligent scheduling and resource management that spans a computer cluster, cloud, or entire data center. Although these tools exist in isolation, they are not available in a general-purpose framework that allows them to interoperate easily and automatically within existing IT infrastructure.
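The contrast drawn here, single-server scheduling versus cluster-wide placement, can be sketched as matching a multi-node request against aggregated free resources; the numbers are illustrative.

```python
# Generic sketch: a data-intensive job needs resources aggregated across
# many nodes, so the scheduler must look at the whole cluster at once
# rather than at one server at a time. Numbers are illustrative.

nodes = {f"node{i:02d}": 16 for i in range(8)}   # free cores per node
nodes["node03"] = 4                               # partially busy node

def place_job(cores_needed, cores_per_node):
    # Cluster-wide view: which nodes can each host one chunk of the job?
    chosen = [n for n, free in nodes.items() if free >= cores_per_node]
    node_count = -(-cores_needed // cores_per_node)   # ceil division
    if len(chosen) < node_count:
        return None                                   # cannot aggregate enough
    return chosen[:node_count]

# An analytics job wanting 96 cores in 16-core chunks spans six nodes:
print(place_job(96, 16))
```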
Tags : 
     Adaptive Computing