By: Intel     Published Date: Aug 06, 2014
Powering Big Data Workloads with Intel® Enterprise Edition for Lustre* software

The Intel® portfolio for high-performance computing provides the following technology solutions:
• Compute - The Intel® Xeon processor E7 family provides a leap forward for every discipline that depends on HPC, with industry-leading performance and improved performance per watt. Add Intel® Xeon Phi coprocessors to your clusters and workstations to increase performance for highly parallel applications and code segments. Each coprocessor can add over a teraflop of performance and is compatible with software written for the Intel® Xeon processor E7 family. You don’t need to rewrite code or master new development tools.
• Storage - High performance, highly scalable storage solutions with Intel® Enterprise Edition for Lustre* software and Intel® Xeon processor E7 based storage systems for centralized storage. Reliable and responsive local storage with Intel® Solid State Drives.
• Networking - Intel® True Scale Fabric and Networking technologies, built for HPC to deliver fast message rates and low latency.
• Software and Tools - A broad range of software and tools to optimize and parallelize your software and clusters.
Tags : 
     Intel
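
A back-of-the-envelope sketch of what the abstract’s “over a teraflop” per coprocessor means at cluster scale. The node count, cards per node, and per-node CPU figure below are hypothetical placeholders, not Intel specifications:

    # Estimate aggregate peak performance when adding coprocessors to an
    # existing cluster. All figures are hypothetical placeholders.
    NODES = 64                    # hypothetical cluster size
    CPU_TFLOPS_PER_NODE = 0.3     # assumed peak for a CPU-only node
    COPROCS_PER_NODE = 2          # assumed coprocessor cards per node
    COPROC_TFLOPS = 1.0           # "over a teraflop" each, per the abstract

    cpu_only = NODES * CPU_TFLOPS_PER_NODE
    with_coprocs = cpu_only + NODES * COPROCS_PER_NODE * COPROC_TFLOPS
    print(f"CPU-only peak:     {cpu_only:.1f} TFLOPS")
    print(f"With coprocessors: {with_coprocs:.1f} TFLOPS")
    print(f"Peak ratio:        {with_coprocs / cpu_only:.1f}x (highly parallel code only)")

Peak figures like these are upper bounds; real applications achieve only a fraction of peak.
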
By: Altair     Published Date: Feb 19, 2014
PBS Works™, Altair's suite of on-demand cloud computing technologies, allows enterprises to maximize ROI on existing infrastructure assets. PBS Works is the most widely implemented software environment for managing grid, cloud, and cluster computing resources worldwide. The suite’s flagship product, PBS Professional®, allows enterprises to easily share distributed computing resources across geographic boundaries. With additional tools for portal-based submission, analytics, and data management, the PBS Works suite is a comprehensive solution for optimizing HPC environments. Leveraging a revolutionary “pay-for-use” unit-based business model, PBS Works delivers increased value and flexibility over conventional software-licensing models.
Tags : 
     Altair
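
A minimal sketch of the kind of batch submission PBS Professional manages, driven from Python. The #PBS directives are standard PBS Pro syntax, but the queue name, resource request, and solver path are hypothetical; adapt them to your site, and note that qsub must be on the PATH:

    # Compose and submit a PBS Professional batch job from Python.
    import subprocess
    import tempfile

    job_script = "\n".join([
        "#!/bin/bash",
        "#PBS -N demo_job",
        "#PBS -l select=2:ncpus=16:mem=32gb",   # two chunks of 16 cores each
        "#PBS -l walltime=01:00:00",
        "#PBS -q workq",                        # hypothetical queue name
        "cd $PBS_O_WORKDIR",
        "./my_solver input.dat",                # hypothetical application
    ]) + "\n"

    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(job_script)
        script_path = f.name

    # On success, qsub prints the new job's ID (e.g. "1234.headnode").
    result = subprocess.run(["qsub", script_path],
                            capture_output=True, text=True, check=True)
    print("Submitted:", result.stdout.strip())
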
By: Altair     Published Date: Jul 15, 2014
With Cray and Altair, engineers have the computational systems they need to perform advanced subsea computational fluid dynamics (CFD) analysis with better speed, scalability and accuracy. With Altair’s AcuSolve CFD solver running on Cray® XC30™ supercomputer systems, operators and engineers responsible for riser system design and analysis can increase component life, reduce uncertainty and improve the overall safety of their ultra-deep-water systems while still meeting their demanding development schedule.
Tags : 
     Altair
By: IBM     Published Date: Jun 05, 2014
As new research and engineering environments expand to include more powerful computers running increasingly complex simulations, the management of these heterogeneous computing environments grows more complex as well. Integrated solutions that include the Intel® Many Integrated Core (MIC) architecture can dramatically boost aggregate performance for highly parallel applications.
Tags : 
     IBM
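
The “highly parallel applications” that benefit from many-core designs like MIC share a common shape: many independent work items with little or no communication between them. The Python sketch below illustrates that workload class with ordinary multiprocessing; it is an illustration only, not MIC offload code (which is typically written in C or Fortran with compiler directives):

    # An embarrassingly parallel workload: independent items, no
    # communication between workers, so throughput scales with cores.
    import math
    from multiprocessing import Pool

    def work_item(seed: int) -> float:
        return sum(math.sin(seed * i) for i in range(100_000))

    if __name__ == "__main__":
        with Pool() as pool:  # defaults to one worker per available core
            results = pool.map(work_item, range(64))
        print(f"processed {len(results)} independent items")
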
By: IBM     Published Date: Jun 05, 2014
Whether in high-performance computing, Big Data or analytics, information technology has become an essential tool in today’s hyper-competitive business landscape. Organizations are increasingly being challenged to do more with less and this is fundamentally impacting the way that IT infrastructure is deployed and managed. In this short e-book, learn the top ten ways that IBM Platform Computing customers are using technologies like IBM Platform LSF and IBM Platform Symphony to help obtain results faster, share resources more efficiently, and improve the overall cost-effectiveness of their global IT infrastructure.
Tags : ibm
     IBM
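
A minimal sketch of submitting work to IBM Platform LSF from Python. bsub and its -J/-n/-q/-o options are standard LSF, but the queue name, slot count, and application are hypothetical placeholders:

    # Submit a job to IBM Platform LSF via bsub.
    import subprocess

    cmd = [
        "bsub",
        "-J", "demo_job",     # job name
        "-n", "8",            # number of job slots
        "-q", "normal",       # hypothetical queue name
        "-o", "demo_%J.out",  # stdout file; %J expands to the job ID
        "./my_app", "input.dat",
    ]

    # On success bsub prints e.g. 'Job <1234> is submitted to queue <normal>.'
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(result.stdout.strip())
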
By: IBM     Published Date: Sep 02, 2014
Whether engaged in genome sequencing, drug design, product analysis or risk management, life sciences research needs high-performance technical environments with the ability to process massive amounts of data and support increasingly sophisticated simulations and analyses. Learn how IBM solutions such as IBM® Platform Computing™ high-performance cluster, grid and high-performance computing (HPC) cloud management software can help life sciences organizations transform and integrate their compute environments to develop products better, faster and at less expense.
Tags : ibm, life sciences, platform computing
     IBM
By: IBM     Published Date: Sep 02, 2014
This brief webcast will cover the new and enhanced capabilities of Elastic Storage 4.1, including native encryption and secure erase, flash-accelerated performance, network performance monitoring, global data sharing, NFS data migration and more. IBM GPFS (Elastic Storage) may be the key to improving your organization’s effectiveness and can help define a clear data management strategy to support future data growth.
Tags : ibm, elastic storage
     IBM
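
A small read-only sketch of checking GPFS state from Python. mmlscluster and mmdf are standard GPFS administration commands (normally installed in /usr/lpp/mmfs/bin and requiring administrative privileges); the filesystem name "gpfs1" is a hypothetical placeholder:

    # Query basic GPFS (Elastic Storage) state: cluster membership and
    # per-pool capacity/usage. Read-only; no configuration is changed.
    import subprocess

    GPFS_BIN = "/usr/lpp/mmfs/bin"

    def run(cmd):
        return subprocess.run(cmd, capture_output=True, text=True,
                              check=True).stdout

    print(run([f"{GPFS_BIN}/mmlscluster"]))    # nodes in the cluster
    print(run([f"{GPFS_BIN}/mmdf", "gpfs1"]))  # capacity and usage
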
By: IBM     Published Date: Sep 16, 2015
According to our global study of more than 800 cloud decision makers and users, organizations are becoming increasingly focused on the business value cloud provides. Cloud is integral to mobile, social and analytics initiatives and to the big data management challenge that often comes with them, and it helps power the entire suite of game-changing technologies. Enterprises can aim higher when these deployments are riding on the cloud. Mobile, analytics and social implementations can be bigger, bolder and drive greater impact when backed by scalable infrastructure. In addition to scale, cloud can provide integration, gluing the individual technologies into more cohesive solutions. Learn how companies are gaining a competitive advantage with cloud computing.
Tags : 
     IBM
By: SGI     Published Date: Jun 08, 2016
With High Performance Computing (HPC) supercomputer systems that comprise tens, hundreds, or even thousands of computing cores, users are able to increase application performance and accelerate their workflows to realize dramatic productivity improvements. The performance potential often comes at the cost of complexity. By their very nature, supercomputers comprise a great number of components, both hardware and software, that must be installed, configured, tuned, and monitored to maintain maximum efficiency. In a recent report, IDC lists downtime and latency as two of the most important problems faced by data center managers.
Tags : 
     SGI
By: Penguin Computing     Published Date: Mar 23, 2015
The Open Compute Project, initiated by Facebook as a way to increase computing power while lowering associated costs with hyper-scale computing, has gained a significant industry following. While the initial specifications were created for a Web 2.0 environment, Penguin Computing has adapted these concepts to create a complete hardware ecosystem solution that addresses these needs and more. The Tundra OpenHPC system is applicable to a wide range of HPC challenges and delivers the most requested features for data center architects.
Tags : penguin computing, open computing, computing power, hyper-scale computing, tundra openhpc
     Penguin Computing
By: Adaptive Computing     Published Date: Feb 21, 2014
Big data applications represent a fast-growing category of high-value applications that are increasingly employed by business and technical computing users. However, they have exposed an inconvenient dichotomy in the way resources are utilized in data centers. Conventional enterprise and web-based applications can be executed efficiently in virtualized server environments, where resource management and scheduling is generally confined to a single server. By contrast, data-intensive analytics and technical simulations demand large aggregated resources, necessitating intelligent scheduling and resource management that spans a computer cluster, cloud, or entire data center. Although these tools exist in isolation, they are not available in a general-purpose framework that allows them to interoperate easily and automatically within existing IT infrastructure.
Tags : 
     Adaptive Computing
By: Bright Computing     Published Date: Feb 22, 2016
Cloud computing offers organizations a clear opportunity to introduce operational efficiency that would be too difficult or costly to achieve internally. As such, we are continuing to see an increase in cloud adoption for workloads across the commercial market. However, recent research suggests that, despite continued growth in the number of companies reporting that they use cloud computing, the vast majority of corporate workloads remain on premises. It is clear that companies are carefully considering the security, privacy, and performance aspects of cloud transition and struggling to achieve cloud adoption with limited internal cloud expertise. Register to read more.
Tags : 
     Bright Computing
By: IBM     Published Date: Feb 13, 2015
A*Star had high levels of user discontent and too few computational resources for its user population and number of research projects. Platform LSF acted as the single unifying workload scheduler and helped rapidly increase resource utilization.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
This major Hollywood studio wanted to reduce the compute time required to render animated films. An HPC solution powered by Platform LSF increased compute capacity, allowing the release of two major feature films and multiple animated shorts.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
The University of East Anglia wished to create a “green” HPC resource, increase compute power and support research across multiple operating systems. Platform HPC increased compute power from 9 to 21.5 teraflops, cut power consumption rates and costs, and provided flexible, responsive support.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
The new Clusters for Dummies e-book from IBM Platform Computing explains how clustering technology enables you to run higher-quality simulations and shorten the time to discovery. In this e-book, you’ll discover how to:
• Make a cluster work for your business
• Create clusters using commodity components
• Use workload management software for reliable results
• Use cluster management software for simplified administration
• Learn from case studies of clusters in action
With clustering technology you can increase your compute capacity, accelerate your innovation process, shrink time to insight, and improve your productivity, all of which will lead to increased competitiveness for your business.
Tags : 
     IBM
By: Panasas     Published Date: Oct 02, 2014
HPC and technical computing environments require the collection, storage, and transmission of large-scale datasets. To meet these demands, data center architects must consider how increasing storage capacity over time will affect HPC workloads, performance, and system availability. While many enterprises have looked to scale-up NAS to meet their storage needs, this approach can lead to data islands that make it difficult to share data. Distributed, scale-out storage was developed to get around the technology limitations of scale-up NAS architectures.
Tags : 
     Panasas
By: Penguin Computing     Published Date: Oct 14, 2015
IT organizations are facing increasing pressure to deliver critical services to their users while their budgets are reduced or held at current levels. New technologies have the potential to deliver industry-changing information to users who need data in real time, but only if the IT infrastructure is designed and implemented to do so. While computing power continues to decline in cost, the cost of managing large data centers continues to rise: server administration over the life of a computing asset will consume about 75 percent of its total cost.
Tags : 
     Penguin Computing
By: Data Direct Networks     Published Date: Dec 31, 2015
Using high-performance parallel storage solutions, geologists and researchers can now incorporate larger data sets and execute more seismic and reservoir simulations faster than ever before, enabling higher-fidelity geological analysis and significantly reduced exploration risk. Given the high costs of exploration, oil and gas companies are increasingly turning to high-performance DDN storage solutions to eliminate I/O bottlenecks and minimize risk and costs, while delivering a larger number of higher-fidelity simulations in the same time as traditional storage architectures.
Tags : 
     Data Direct Networks
By: Data Direct Networks     Published Date: Dec 31, 2015
When it comes to generating increasingly larger data sets and stretching the limits of high performance computing (HPC), the field of genomics and next generation sequencing (NGS) is at the forefront. The major impetus for this data explosion began in 1990 when the U.S. kicked off the Human Genome Project, an ambitious effort to sequence the three billion base pairs that constitute the complete set of DNA in the human body. Eleven years and $3 billion later the deed was done. This breakthrough was followed by a massive upsurge in genomics research and development that included rapid advances in sequencing using the power of HPC. Today an individual’s genome can be sequenced overnight for less than $1,000.
Tags : 
     Data Direct Networks
By: Dell APAC     Published Date: May 29, 2019
Digital transformation (DX) is reaching a macroeconomic scale. DX business objectives balance tactical and strategic aims, ranging from improving operational efficiency and customer satisfaction to increasing existing product revenue, improving profit margins, and launching new digital revenue streams. Successful DX relies on utilizing data for services as well as converting data into actionable insights. This reliance on data is contributing to a new digital era. 3rd Platform (cloud, social, mobile, and Big Data) computing is the underpinning of DX worldwide. It enables the collection of a vast breadth of data sets and delivers the agility and efficiency needed to accelerate DX.
Tags : 
     Dell APAC