By: Numascale     Published Date: Nov 20, 2013
Using commodity hardware and the “plug-and-play” NumaConnect interconnect, Numascale delivers true shared-memory programming and simpler administration at standard HPC cluster price points. One such system currently offers users more than 1,700 cores with a 4.6 TB single memory image.
Tags : 
     Numascale
By: Intel     Published Date: Aug 06, 2014
Designing a large-scale, high-performance data storage system presents significant challenges. This paper describes a step-by-step approach to designing such a system and presents an iterative methodology that applies at both the component level and the system level. A detailed case study applies the methodology to the design of a Lustre storage system. (A hypothetical sizing sketch in the same spirit follows this entry.)
Tags : intel, high performance storage
     Intel
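To make the component-to-system sizing idea concrete, here is a minimal, hypothetical sketch in Python; it is not taken from the Intel paper, and every figure and name (TARGET_BANDWIDTH_GBS, OSS_BANDWIDTH_GBS, DISK_CAPACITY_TB, DISKS_PER_OSS) is an invented assumption. It simply works up from assumed component-level figures to system-level bandwidth and capacity targets.

    import math

    # Hypothetical system-level targets (illustrative only)
    TARGET_BANDWIDTH_GBS = 500.0   # aggregate throughput goal, GB/s
    TARGET_CAPACITY_PB = 10.0      # usable capacity goal, PB

    # Hypothetical component-level figures (illustrative only)
    OSS_BANDWIDTH_GBS = 6.0        # deliverable throughput per storage server, GB/s
    DISK_CAPACITY_TB = 8.0         # usable capacity per disk, TB
    DISKS_PER_OSS = 60             # disks behind each server

    def size_system(target_bw_gbs, target_cap_pb):
        """Return the server count that satisfies both the bandwidth and capacity targets."""
        servers_for_bw = math.ceil(target_bw_gbs / OSS_BANDWIDTH_GBS)
        servers_for_cap = math.ceil(target_cap_pb * 1000.0 / (DISK_CAPACITY_TB * DISKS_PER_OSS))
        return max(servers_for_bw, servers_for_cap)

    if __name__ == "__main__":
        n = size_system(TARGET_BANDWIDTH_GBS, TARGET_CAPACITY_PB)
        print(f"Servers required: {n}")
        print(f"Aggregate bandwidth: {n * OSS_BANDWIDTH_GBS:.0f} GB/s")
        print(f"Usable capacity: {n * DISKS_PER_OSS * DISK_CAPACITY_TB / 1000.0:.1f} PB")

In a real design exercise the same calculation would be repeated as component choices change, which is the iterative flavor of the methodology the paper describes.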
By: Bright Computing     Published Date: May 05, 2014
A successful HPC cluster is a powerful asset for an organization. The following essential strategies are guidelines for the effective operation of an HPC cluster resource:
1. Plan To Manage the Cost of Software Complexity
2. Plan for Scalable Growth
3. Plan to Manage Heterogeneous Hardware/Software Solutions
4. Be Ready for the Cloud
5. Have an Answer for the Hadoop Question
Bright Cluster Manager addresses the above strategies remarkably well and allows HPC and Hadoop clusters to be easily created, monitored, and maintained using a single comprehensive user interface. Administrators can focus on more sophisticated, value-adding tasks rather than developing homegrown solutions that may cause problems as clusters grow and change. The end result is an efficient and successful HPC cluster that maximizes user productivity.
Tags : bright computing, hpc clusters
     Bright Computing
By: IBM     Published Date: Jun 05, 2014
Whether in high-performance computing, Big Data or analytics, information technology has become an essential tool in today’s hyper-competitive business landscape. Organizations are increasingly being challenged to do more with less and this is fundamentally impacting the way that IT infrastructure is deployed and managed. In this short e-book, learn the top ten ways that IBM Platform Computing customers are using technologies like IBM Platform LSF and IBM Platform Symphony to help obtain results faster, share resources more efficiently, and improve the overall cost-effectiveness of their global IT infrastructure.
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
This demonstration shows how an organization using IBM Platform Computing workload managers can easily and securely tap resources in the IBM SoftLayer public cloud to handle periods of peak demand and reduce total IT infrastructure costs.
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
In an audited benchmark conducted by STAC®, the Securities Technology Analysis Center, InfoSphere BigInsights for Hadoop was found to deliver an average performance gain of approximately 4x over open source Hadoop running jobs derived from production workload traces. The result is consistent with the approximately 11x advantage in raw scheduling performance provided by Adaptive MapReduce, a new InfoSphere BigInsights for Hadoop feature that leverages high-performance computing technology from IBM Platform Computing.
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
This book examines data storage and management challenges and explains software-defined storage, an innovative solution for high-performance, cost-effective storage using the IBM General Parallel File System (GPFS).
Tags : ibm
     IBM
By: IBM     Published Date: Sep 16, 2015
The IBM Spectrum Scale solution delivered up to 11x better throughput than EMC Isilon for Spectrum Protect (TSM) workloads. Using published data, Edison compared an EMC® Isilon® solution against an IBM® Spectrum Scale™ solution. (IBM Spectrum Scale was formerly IBM® General Parallel File System™, or IBM® GPFS™, also known by the code name Elastic Storage.) For both solutions, IBM® Spectrum Protect™ (formerly IBM Tivoli® Storage Manager, or IBM® TSM®) was used as a common workload performing backups to the target storage systems evaluated.
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
Learn how organizations in cancer research, speech recognition, financial services, automotive design and more are using IBM solutions to improve business results. IBM Software Defined Infrastructure enables organizations to deliver IT services in the most efficient way possible, maximizing resource utilization to accelerate time to results and reduce costs. It is the foundation for a fully integrated software defined environment, optimizing compute, storage and networking infrastructure so organizations can quickly adapt to changing business requirements.
Tags : 
     IBM
By: IBM     Published Date: May 20, 2015
Whether in high-performance computing, Big Data or analytics, information technology has become an essential tool in today’s hyper-competitive business landscape. Organizations are increasingly being challenged to do more with less and this is fundamentally impacting the way that IT infrastructure is deployed and managed. In this short e-book, learn the top ten ways that IBM Platform Computing customers are using technologies like IBM Platform LSF and IBM Platform Symphony to help obtain results faster, share resources more efficiently, and improve the overall cost-effectiveness of their global IT infrastructure.
Tags : 
     IBM
By: NVIDIA & Bright Computing     Published Date: Sep 01, 2015
As of June 2015, the second-fastest computer in the world, as measured by the Top500 list, employed NVIDIA® GPUs. Of the systems on that list that use accelerators, 60% use NVIDIA GPUs. The performance kick provided by computing accelerators has pushed High Performance Computing (HPC) to new levels. When discussing GPU accelerators, the focus is often on the price-to-performance benefits to the end user. The true cost of managing and using GPUs goes far beyond the hardware price, however. Understanding and managing these costs helps provide more efficient and productive systems.
Tags : 
     NVIDIA & Bright Computing
By: Intel     Published Date: Apr 01, 2016
Since its beginnings in 1999 as a project at Carnegie Mellon University, Lustre, the high-performance parallel file system, has come a long, long way. Designed with a constant focus on performance and scalability, it is now part of nearly every High Performance Computing (HPC) cluster on Top500.org's list of the fastest computers in the world, present in 70 percent of the top 100 and nine out of the top ten. That's an achievement for any developer, or community of developers in the case of Lustre, to be proud of.
Tags : 
     Intel
By: Bright Computing     Published Date: Feb 22, 2016
Cloud computing offers organizations a clear opportunity to introduce operational efficiency that would be too difficult or costly to achieve internally. As such, we continue to see an increase in cloud adoption for workloads across the commercial market. However, recent research suggests that, despite continued increases in the number of companies reporting that they use cloud computing, the vast majority of corporate workloads remain on premises. It is clear that companies are carefully weighing the security, privacy, and performance aspects of a cloud transition and struggling to achieve cloud adoption with limited internal cloud expertise. Register to read more.
Tags : 
     Bright Computing
By: IBM     Published Date: Nov 14, 2014
The IBM Platform HPC Total Cost of Ownership (TCO) tool offers a three-year total cost of ownership view of your distributed computing environment and the savings you could potentially realize by using IBM Platform HPC in place of competing cluster management software. The model estimates the savings from deploying intelligent cluster management software; a hypothetical worked comparison in the same spirit follows this entry. Use of this simple tool is no substitute for detailed analysis. To have IBM perform a business value assessment for your environment and provide a more accurate estimate of potential savings, please contact your IBM representative.
Tags : 
     IBM
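As a rough illustration of what a three-year TCO comparison covers, the following Python sketch sums licensing, administration labor, and one-time setup over 36 months. It is not the IBM tool; every input (license_per_node, admin_hours_per_month, hourly_rate, setup_cost, node count) is an invented placeholder.

    # Hypothetical 3-year TCO comparison between two cluster management options.
    # All inputs are invented placeholders, not IBM figures.

    def three_year_tco(license_per_node, nodes, admin_hours_per_month, hourly_rate, setup_cost):
        """Sum licensing, administration labor, and one-time setup over 36 months."""
        licensing = license_per_node * nodes * 3           # annual license x 3 years
        admin = admin_hours_per_month * 36 * hourly_rate   # monthly admin effort x 36 months
        return licensing + admin + setup_cost

    if __name__ == "__main__":
        baseline = three_year_tco(license_per_node=450, nodes=128,
                                  admin_hours_per_month=80, hourly_rate=75, setup_cost=20000)
        integrated = three_year_tco(license_per_node=500, nodes=128,
                                    admin_hours_per_month=30, hourly_rate=75, setup_cost=8000)
        print(f"Baseline stack 3-year TCO:   ${baseline:,.0f}")
        print(f"Integrated stack 3-year TCO: ${integrated:,.0f}")
        print(f"Potential 3-year savings:    ${baseline - integrated:,.0f}")

A real assessment would also account for hardware, power, downtime, and productivity effects, which is why the entry above recommends a detailed analysis.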
By: IBM     Published Date: Nov 14, 2014
IBM® has created a proprietary implementation of the open-source Hadoop MapReduce run-time that leverages the IBM Platform™ Symphony distributed computing middleware while maintaining application-level compatibility with Apache Hadoop. (An illustrative example of the kind of unmodified Hadoop job such compatibility targets follows this entry.)
Tags : 
     IBM
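Application-level compatibility means existing Hadoop jobs should run unchanged. As a purely illustrative example of such an unmodified job (not taken from the IBM paper), here is a standard Hadoop Streaming word-count mapper and reducer in Python; a runtime that is compatible at the application level would be expected to execute jobs like this as-is.

    #!/usr/bin/env python3
    # Standard Hadoop Streaming word count; mapper and reducer read stdin and write stdout.
    # Purely illustrative of an unmodified Hadoop job; not specific to any vendor runtime.
    import sys
    from itertools import groupby

    def mapper():
        for line in sys.stdin:
            for word in line.split():
                print(f"{word}\t1")

    def reducer():
        # Hadoop delivers mapper output sorted by key, so consecutive keys can be grouped.
        pairs = (line.rstrip("\n").split("\t", 1) for line in sys.stdin)
        for word, group in groupby(pairs, key=lambda kv: kv[0]):
            print(f"{word}\t{sum(int(count) for _, count in group)}")

    if __name__ == "__main__":
        mapper() if sys.argv[1:] == ["map"] else reducer()

Locally this can be sanity-checked with a shell pipeline (cat input.txt | script map | sort | script reduce); on a cluster it would be submitted through the usual Hadoop Streaming job invocation.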
By: IBM     Published Date: Nov 14, 2014
This book examines data storage and management challenges and explains software-defined storage, an innovative solution for high-performance, cost-effective storage using the IBM General Parallel File System (GPFS).
Tags : 
     IBM
By: IBM     Published Date: Nov 14, 2014
View this series of short webcasts to learn how IBM Platform Computing products can help you maximize the agility of your distributed computing environment by improving operational efficiency, simplifying the user experience, optimizing application usage and license sharing, addressing spikes in infrastructure demand, and reducing data management costs.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
To quickly and economically meet new and peak demands, Platform LSF (SaaS) and Platform Symphony (SaaS) workload management, as well as Elastic Storage on Cloud data management software, can be delivered as a service, complete with SoftLayer cloud infrastructure and 24x7 support for technical computing and service-oriented workloads. Watch this demonstration to learn how the IBM Platform Computing Cloud Service can be used to simplify and accelerate financial risk management using IBM Algorithmics.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
The IBM Platform HPC Total Cost of Ownership (TCO) tool offers a three-year total cost of ownership view of your distributed computing environment and the savings you could potentially realize by using IBM Platform HPC in place of competing cluster management software. The model estimates the savings from deploying intelligent cluster management software. Use of this simple tool is no substitute for detailed analysis. To have IBM perform a business value assessment for your environment and provide a more accurate estimate of potential savings, please contact your IBM representative.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
This major Hollywood studio wanted to reduce the compute time required to render animated films. Using an HPC solution powered by Platform LSF increased compute capacity, allowing the release of two major feature films and multiple animated shorts.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
The new Clusters for Dummies e-book from IBM Platform Computing explains how clustering technology enables you to run higher quality simulations and shorten the time to discoveries. In this e-book, you'll discover how to:
- Make a cluster work for your business
- Create clusters using commodity components
- Use workload management software for reliable results
- Use cluster management software for simplified administration
- Learn from case studies of clusters in action
With clustering technology you can increase your compute capacity, accelerate the innovation process, shrink the time to insights, and improve your productivity, all of which will lead to increased competitiveness for your business.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
This demonstration shows how an organization using IBM Platform Computing workload managers can easily and securely tap resources in the IBM SoftLayer public cloud to handle periods of peak demand and reduce total IT infrastructure costs.
Tags : 
     IBM
By: TYAN     Published Date: Jun 06, 2016
Organizations that implement high-performance computing (HPC) technologies have a wide range of requirements. From small manufacturing suppliers to national research institutions, significant computing technology is critical to creating innovative products and conducting leading-edge research. No two HPC installations are the same. For maximum return, budget, software requirements, performance and customization all must be considered before installing and operating a successful environment.
Tags : 
     TYAN
By: Data Direct Networks     Published Date: Dec 31, 2015
Using high performance parallel storage solutions, geologists and researchers can now incorporate larger data sets and execute more seismic and reservoir simulations faster than ever before, enabling higher fidelity geological analysis and significantly reducing exploration risk. With the high costs of exploration, oil and gas companies are increasingly turning to high performance DDN storage solutions to eliminate I/O bottlenecks and minimize risk and costs, while delivering a larger number of higher fidelity simulations in the same time as traditional storage architectures.
Tags : 
     Data Direct Networks