By: Seagate     Published Date: Jan 26, 2016
Finding oil and gas has always been a tricky proposition, given that reserves are primarily hidden underground and, as often as not, under the ocean as well. The costs involved in acquiring rights to a site, drilling the wells, and operating them are considerable, and they have driven the industry to adopt advanced technologies for locating the most promising sites. As a consequence, oil and gas exploration today is essentially an exercise in scientific visualization and modeling, employing some of the most advanced computational technologies available. High performance computing (HPC) systems are being used to fill these needs, primarily x86-based cluster computers with Lustre storage systems. The technology is well developed, but the scale of the problem demands medium to large-sized systems, requiring a significant capital outlay and operating expense. The most powerful systems deployed by oil and gas companies are petaflop-scale computers with multiple petabytes of attached storage.
Tags : 
     Seagate
By: Numascale     Published Date: Nov 20, 2013
Using commodity hardware and the “plug-and-play” NumaConnect interconnect, Numascale delivers true shared-memory programming and simpler administration at standard HPC cluster price points. One such system currently offers users over 1,700 cores with a 4.6 TB single memory image.
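To illustrate what a single large memory image makes possible, here is a minimal sketch in Python using the standard multiprocessing.shared_memory module. It is illustrative only, not Numascale-specific code; on a NumaConnect system, ordinary processes or threads simply see one large address space.

    import numpy as np
    from multiprocessing import Process, shared_memory

    N = 1_000_000
    WORKERS = 4

    def worker(name, start, stop, out_name, idx):
        # Attach to the existing shared block by name; no data is copied.
        shm = shared_memory.SharedMemory(name=name)
        data = np.ndarray((N,), dtype=np.float64, buffer=shm.buf)
        out_shm = shared_memory.SharedMemory(name=out_name)
        out = np.ndarray((WORKERS,), dtype=np.float64, buffer=out_shm.buf)
        out[idx] = data[start:stop].sum()   # partial sum over this slice
        shm.close()
        out_shm.close()

    if __name__ == "__main__":
        shm = shared_memory.SharedMemory(create=True, size=N * 8)
        data = np.ndarray((N,), dtype=np.float64, buffer=shm.buf)
        data[:] = 1.0
        out_shm = shared_memory.SharedMemory(create=True, size=WORKERS * 8)
        out = np.ndarray((WORKERS,), dtype=np.float64, buffer=out_shm.buf)
        step = N // WORKERS
        procs = [Process(target=worker,
                         args=(shm.name, i * step, (i + 1) * step, out_shm.name, i))
                 for i in range(WORKERS)]
        for p in procs: p.start()
        for p in procs: p.join()
        print("total =", out.sum())         # 1000000.0
        shm.close(); shm.unlink()
        out_shm.close(); out_shm.unlink()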
Tags : 
     Numascale
By: Intel     Published Date: Aug 06, 2014
Around the world and across all industries, high-performance computing is being used to solve today’s most important and demanding problems. More than ever, storage solutions that deliver high sustained throughput are vital for powering HPC and Big Data workloads.
Tags : intel, enterprise edition lustre software
     Intel
By: Intel     Published Date: Aug 06, 2014
Powering Big Data Workloads with Intel® Enterprise Edition for Lustre* software. The Intel® portfolio for high-performance computing provides the following technology solutions:
• Compute – The Intel® Xeon processor E7 family provides a leap forward for every discipline that depends on HPC, with industry-leading performance and improved performance per watt. Add Intel® Xeon Phi coprocessors to your clusters and workstations to increase performance for highly parallel applications and code segments. Each coprocessor can add over a teraflop of performance and is compatible with software written for the Intel® Xeon processor E7 family. You don’t need to rewrite code or master new development tools.
• Storage – High-performance, highly scalable storage solutions with Intel® Enterprise Edition for Lustre* software and Intel® Xeon processor E7 based storage systems for centralized storage, plus reliable and responsive local storage with Intel® Solid State Drives.
• Networking – Intel® True Scale Fabric and Networking technologies, built for HPC to deliver fast message rates and low latency.
• Software and tools – A broad range of software and tools to optimize and parallelize your software and clusters.
Tags : 
     Intel
By: Altair     Published Date: Feb 19, 2014
PBS Works™, Altair's suite of on-demand cloud computing technologies, allows enterprises to maximize ROI on existing infrastructure assets. PBS Works is the most widely implemented software environment for managing grid, cloud, and cluster computing resources worldwide. The suite’s flagship product, PBS Professional®, allows enterprises to easily share distributed computing resources across geographic boundaries. With additional tools for portal-based submission, analytics, and data management, the PBS Works suite is a comprehensive solution for optimizing HPC environments. Leveraging a revolutionary “pay-for-use” unit-based business model, PBS Works delivers increased value and flexibility over conventional software-licensing models.
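As a concrete illustration of how work is typically handed to PBS Professional, the sketch below generates a simple batch script and submits it with qsub. It assumes a generic PBS Professional cluster with qsub on the PATH; the job name, resource selections, and application command are placeholders, not Altair-supplied examples.

    import subprocess
    import tempfile

    # A minimal PBS Professional batch script; the resource requests and
    # the application command are illustrative placeholders.
    job_script = """#!/bin/bash
    #PBS -N drop_test
    #PBS -l select=2:ncpus=16:mem=32gb
    #PBS -l walltime=02:00:00
    #PBS -j oe
    cd $PBS_O_WORKDIR
    ./run_simulation --input model.dat
    """

    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(job_script)
        path = f.name

    # qsub prints the new job's ID on success.
    job_id = subprocess.check_output(["qsub", path], text=True).strip()
    print("submitted:", job_id)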
Tags : 
     Altair
By: Altair     Published Date: Jul 15, 2014
Impact analysis or drop testing is one of the most important stages of product design and development, and software that can simulate this testing accurately yields dramatic cost and time-to-market benefits for manufacturers. Dell, Intel and Altair have collaborated to analyze a virtual drop test solution with integrated simulation and optimization analysis, delivering proven gains in speed and accuracy. With this solution, engineers can explore more design alternatives for improved product robustness and reliability. As a result, manufacturers can significantly reduce the time to develop high-performing designs, improving product quality while minimizing time to delivery.
Tags : 
     Altair
By: Bright Computing     Published Date: May 05, 2014
A successful HPC cluster is a powerful asset for an organization. The following essential strategies are guidelines for the effective operation of an HPC cluster resource:
1. Plan To Manage the Cost of Software Complexity
2. Plan for Scalable Growth
3. Plan to Manage Heterogeneous Hardware/Software Solutions
4. Be Ready for the Cloud
5. Have an Answer for the Hadoop Question
Bright Cluster Manager addresses the above strategies remarkably well and allows HPC and Hadoop clusters to be easily created, monitored, and maintained using a single comprehensive user interface. Administrators can focus on more sophisticated, value-adding tasks rather than developing homegrown solutions that may cause problems as clusters grow and change. The end result is an efficient and successful HPC cluster that maximizes user productivity.
Tags : bright computing, hpc clusters
     Bright Computing
By: IBM     Published Date: Jun 05, 2014
Are infrastructure limitations holding you back?
- Users struggling with access to sufficient compute resources
- Resources tapped out during peak demand times
- Lack of budget, space or power for the environment
IBM recently announced a new Platform Computing cloud service delivering hybrid cloud optimized for analytics and technical computing applications. The offering provides:
- Ready-to-run IBM Platform LSF & Symphony clusters in the cloud
- Seamless workload management, on premise and in the cloud
- 24x7 cloud operation technical support
- Dedicated, isolated physical machines for complete security
Join us for this brief 20-minute webcast to learn how IBM offers a complete end-to-end hybrid cloud solution that may be key to improving your organization’s effectiveness and expediting time to market for your products.
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
In an audited benchmark conducted by STAC®, the Securities Technology Analysis Center, InfoSphere BigInsights for Hadoop was found to deliver an approximate 4x average performance gain over open source Hadoop when running jobs derived from production workload traces. The result is consistent with an approximately 11x advantage in raw scheduling performance provided by Adaptive MapReduce – a new InfoSphere BigInsights for Hadoop feature that leverages high-performance computing technology from IBM Platform Computing.
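For intuition on how such an average gain is computed from workload traces, the sketch below derives per-job speedups and their geometric mean from hypothetical runtimes. The numbers are invented for illustration and are not the STAC benchmark data.

    # Hypothetical per-job runtimes (seconds) from a workload trace;
    # these figures are invented for illustration only.
    baseline = [120.0, 300.0, 45.0, 600.0]   # open source Hadoop
    improved = [ 30.0,  80.0, 11.0, 140.0]   # optimized stack

    speedups = [b / i for b, i in zip(baseline, improved)]

    # The geometric mean is the usual way to average speedup ratios.
    geo_mean = 1.0
    for s in speedups:
        geo_mean *= s
    geo_mean **= 1.0 / len(speedups)

    print([round(s, 2) for s in speedups])   # [4.0, 3.75, 4.09, 4.29]
    print(round(geo_mean, 2))                # ~4.03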
Tags : ibm
     IBM
By: IBM     Published Date: Aug 27, 2014
Are infrastructure limitations holding you back?
- Users struggling with access to sufficient compute resources
- Resources tapped out during peak demand times
- Lack of budget, space or power for the environment
IBM recently announced a new Platform Computing cloud service delivering hybrid cloud optimized for analytics and technical computing applications. The offering provides:
- Ready-to-run IBM Platform LSF & Symphony clusters in the cloud
- Seamless workload management, on premise and in the cloud
- 24x7 cloud operation technical support
- Dedicated, isolated physical machines for complete security
Join us for this brief 20-minute webcast to learn how IBM offers a complete end-to-end hybrid cloud solution that may be key to improving your organization’s effectiveness and expediting time to market for your products.
Tags : ibm, hybrid cloud
     IBM
By: IBM     Published Date: Sep 02, 2014
With tougher regulations and continuing market volatility, financial firms are moving to active risk management with a focus on counterparty risk. Firms are revamping their risk and trading practices from top to bottom. They are adopting new risk models and frameworks that support a holistic view of risk. Banks recognize that technology is critical for this transformation, and are adding state-of-the-art enterprise risk management solutions, high performance data and grid management software, and fast hardware. Join IBM Algorithmics and IBM Platform Computing to gain insights into this trend and into technologies for enabling active “real-time” risk management.
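As a toy illustration of the kind of computation behind counterparty risk measures, the following sketch estimates expected positive exposure (EPE) for a single trade by Monte Carlo — a standard textbook approach, not IBM Algorithmics code; all parameters are invented.

    import numpy as np

    rng = np.random.default_rng(42)

    # Toy setup: mark-to-market of one trade follows a driftless random walk.
    # All parameters are invented for illustration.
    n_paths, n_steps = 10_000, 250
    dt, vol = 1.0 / 250.0, 0.20

    shocks = rng.normal(0.0, vol * np.sqrt(dt), size=(n_paths, n_steps))
    mtm = np.cumsum(shocks, axis=1)          # simulated trade value paths

    # Counterparty exposure is only the positive part of the trade value.
    exposure = np.maximum(mtm, 0.0)

    # Expected positive exposure profile over time, then its average.
    epe_profile = exposure.mean(axis=0)
    print("average EPE:", round(float(epe_profile.mean()), 4))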
Tags : 
     IBM
By: IBM     Published Date: Sep 02, 2014
Comparing IBM performance against competitive offerings to help organizations deploy workloads with confidence. IBM Platform Computing Cloud Service lets users economically add computing capacity by accessing ready-to-use clusters in the cloud, delivering high performance that compares favorably with cloud offerings from other providers. Tests show that the IBM service delivers the best (or ties for the best) absolute performance in all test categories. Learn more.
Tags : ibm, platform computing cloud services, benchmarking performance
     IBM
By: IBM     Published Date: Sep 16, 2015
6 criteria for evaluating high-performance cloud service providers. Engineering, scientific, analytics, big data and research workloads place extraordinary demands on technical and high-performance computing (HPC) infrastructure. Supporting these workloads can be especially challenging for organizations that have unpredictable spikes in resource demand, or need access to additional compute or storage resources for a project or to support a growing business. Software Defined Infrastructure (SDI) enables organizations to deliver HPC services in the most efficient way possible, optimizing resource utilization to accelerate time to results and reduce costs. SDI is the foundation for a fully integrated environment, optimizing compute, storage and networking infrastructure to quickly adapt to changing business requirements, and dynamically managing workloads and data.
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
Learn how organizations in cancer research, speech recognition, financial services, automotive design and more are using IBM solutions to improve business results. IBM Software Defined Infrastructure enables organizations to deliver IT services in the most efficient way possible, optimizing resource utilization to accelerate time to results and reduce costs. It is the foundation for a fully integrated software defined environment, optimizing compute, storage and networking infrastructure so organizations can quickly adapt to changing business requirements.
Tags : 
     IBM
By: Penguin Computing     Published Date: Mar 23, 2015
The Open Compute Project, initiated by Facebook as a way to increase computing power while lowering associated costs with hyper-scale computing, has gained a significant industry following. While the initial specifications were created for a Web 2.0 environment, Penguin Computing has adapted these concepts to create a complete hardware ecosystem solution that addresses these needs and more. The Tundra OpenHPC system is applicable to a wide range of HPC challenges and delivers the most requested features for data center architects.
Tags : penguin computing, open computing, computing power, hyper-scale computing, tundra openhpc
     Penguin Computing
By: IBM     Published Date: May 20, 2015
Software defined storage is enterprise-class storage that uses standard hardware, with all the important storage and management functions performed in intelligent software. Software defined storage delivers automated, policy-driven, application-aware storage services through orchestration of the underlying storage infrastructure, in support of an overall software defined environment.
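To make “policy-driven, application-aware” concrete, here is a deliberately simplified sketch of how a storage orchestrator might map application requirements to a tier. The policy table and tier names are hypothetical; a real product evaluates far richer rules.

    # Hypothetical tiers and policies; real software defined storage also
    # weighs QoS, locality, replication, cost, and more.
    TIERS = {"flash": {"latency_ms": 1}, "disk": {"latency_ms": 10},
             "archive": {"latency_ms": 1000}}

    POLICIES = {
        "database":  {"max_latency_ms": 2,    "replicas": 3},
        "analytics": {"max_latency_ms": 20,   "replicas": 2},
        "backup":    {"max_latency_ms": 2000, "replicas": 1},
    }

    def place(app: str) -> str:
        """Pick the cheapest tier that satisfies the app's latency policy."""
        policy = POLICIES[app]
        # Tiers ordered cheapest-first: archive, disk, flash.
        for tier in ("archive", "disk", "flash"):
            if TIERS[tier]["latency_ms"] <= policy["max_latency_ms"]:
                return tier
        raise ValueError(f"no tier satisfies policy for {app}")

    for app in POLICIES:
        print(app, "->", place(app))
    # database -> flash, analytics -> disk, backup -> archive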
Tags : 
     IBM
By: IBM     Published Date: May 20, 2015
In this white paper, we look at various cloud models, and assess their suitability to solve IT challenges. We provide recommendations on what to look for in a cloud provider. Finally, we take a look at the IBM Cloud portfolio.
Tags : 
     IBM
By: IBM     Published Date: May 20, 2015
Are infrastructure limitations holding you back?
- Users struggling with access to sufficient compute resources
- Resources tapped out during peak demand times
- Lack of budget, space or power for the environment
IBM recently announced a new Platform Computing cloud service delivering hybrid cloud optimized for analytics and technical computing applications. The offering provides:
- Ready-to-run IBM Platform LSF & Symphony clusters in the cloud
- Seamless workload management, on premise and in the cloud
- 24x7 cloud operation technical support
- Dedicated, isolated physical machines for complete security
Join us for this brief 20-minute webcast to learn how IBM offers a complete end-to-end hybrid cloud solution that may be key to improving your organization’s effectiveness and expediting time to market for your products.
Tags : 
     IBM
By: IBM     Published Date: May 20, 2015
The latest generation of highly scalable HPC clusters is a game changer for design optimization challenges. HPC clusters, built on a modular, multi-core x86 architecture, provide a cost effective and accessible platform on which to conduct realistic simulation compared with the “big iron” HPC systems of the past or with the latest workstation models. This paper provides 6 steps to making clusters a reality for any business.
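One way to see why clusters beat a single workstation for realistic simulation is Amdahl’s law. The sketch below computes ideal speedup for a hypothetical job that is 95% parallelizable; the fraction and core counts are assumptions for illustration.

    # Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n)
    # p = parallel fraction of the job (0.95 is an assumed value).
    def speedup(p: float, n: int) -> float:
        return 1.0 / ((1.0 - p) + p / n)

    p = 0.95
    for n in (8, 64, 512):   # workstation vs. small and mid-size clusters
        print(n, "cores ->", round(speedup(p, n), 1), "x")
    # 8 cores -> 5.9x, 64 cores -> 15.4x, 512 cores -> 19.3x
    # The serial 5% caps ideal speedup at 1/(1-p) = 20x, however many cores.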
Tags : 
     IBM
By: Bull     Published Date: Dec 04, 2014
Bull, an Atos company, is a leader in Big Data, HPC and cyber-security with a worldwide market presence. Bull has extensive experience in implementing and running petaflops-scale supercomputers. The exascale program is a new step forward in Bull’s strategy to deliver exascale supercomputers capable of addressing the new challenges of science, industry and society.
Tags : bull, exascale, big data, hpc, cyber security, supercomputers
     Bull
By: Intel     Published Date: Sep 16, 2014
In this Guide, we take a look at what an HPC solution like Lustre can deliver for a broad community of business and commercial organizations struggling with the challenge of big data and demanding storage growth.
Tags : intel, lustre*, solution for business
     Intel
By: IBM     Published Date: Nov 14, 2014
The IBM Platform HPC Total Cost of Ownership (TCO) tool offers a 3-year total cost of ownership view of your distributed computing environment and the savings you could potentially realize by using IBM Platform HPC in place of competing cluster management software. The model estimates the savings from deploying intelligent cluster management software. This simple tool is no substitute for detailed analysis; to have IBM perform a business value assessment for your environment and provide a more accurate estimate of potential savings, please contact your IBM representative.
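To show the shape of such a 3-year TCO comparison, here is a toy calculation. Every figure is invented for illustration and is not output of the IBM tool.

    # Toy 3-year TCO comparison; all dollar figures are invented.
    YEARS = 3

    def tco(hardware, software_per_year, admin_hours_per_year, hourly_rate=90):
        """Capital cost plus recurring software and administration cost."""
        recurring = YEARS * (software_per_year + admin_hours_per_year * hourly_rate)
        return hardware + recurring

    homegrown = tco(hardware=250_000, software_per_year=0,      admin_hours_per_year=1200)
    managed   = tco(hardware=250_000, software_per_year=40_000, admin_hours_per_year=300)

    print("homegrown:", homegrown)                  # 574000
    print("managed:  ", managed)                    # 451000
    print("3-year savings:", homegrown - managed)   # 123000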
Tags : 
     IBM
By: IBM     Published Date: Nov 14, 2014
A high performance computing (HPC) cluster refers to a group of servers built from off-the-shelf components that are connected via certain interconnect technologies. A cluster can deliver aggregated computing power from its many processors with many cores — sometimes hundreds, even thousands — to meet the processing demands of more complex engineering software, and therefore deliver results faster than individual workstations. If your company is in the majority that could benefit from access to more computing power, a cluster comprised of commodity servers may be a viable solution to consider, especially now that they’re easier to purchase, deploy, configure and maintain than ever before. Read more and learn about the '5 Easy Steps to a High Performance Cluster'.
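For a minimal taste of programming such a cluster, the sketch below has each process sum its own slice of a range and rank 0 collect the total. It assumes MPI and the mpi4py package are installed, a common but not universal choice on commodity clusters.

    # Run with e.g.: mpirun -np 8 python partial_sum.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()       # this process's ID
    size = comm.Get_size()       # total number of processes in the job

    # Each rank sums its own slice of 0..999999.
    n = 1_000_000
    chunk = n // size
    start = rank * chunk
    stop = n if rank == size - 1 else start + chunk
    partial = sum(range(start, stop))

    # Reduce all partial sums onto rank 0.
    total = comm.reduce(partial, op=MPI.SUM, root=0)
    if rank == 0:
        print("total =", total)  # 499999500000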
Tags : 
     IBM
By: IBM     Published Date: Nov 14, 2014
The latest generation of highly scalable HPC clusters is a game changer for design optimization challenges. HPC clusters, built on a modular, multi-core x86 architecture, provide a cost effective and accessible platform on which to conduct realistic simulation compared with the “big iron” HPC systems of the past or with the latest workstation models. This paper provides 6 steps to making clusters a reality for any business.
Tags : 
     IBM