By: Intel     Published Date: Aug 06, 2014
Purpose-built for use with the dynamic computing resources available from Amazon Web Services™, the Intel Lustre* solution provides the fast, massively scalable storage software needed to accelerate performance, even on complex workloads. Intel is a driving force behind the development of Lustre, and is committed to providing fast, scalable, and cost-effective storage with added support and manageability. Intel® Enterprise Edition for Lustre* software is the foundation for dynamic AWS-based workloads. Now you can innovate on your problem, not your infrastructure.
Tags : intel, cloud edition lustre software, scalable storage software
     Intel
By: Altair     Published Date: Feb 19, 2014
PBS Works™, Altair's suite of on-demand cloud computing technologies, allows enterprises to maximize ROI on existing infrastructure assets. PBS Works is the most widely implemented software environment for managing grid, cloud, and cluster computing resources worldwide. The suite’s flagship product, PBS Professional®, allows enterprises to easily share distributed computing resources across geographic boundaries. With additional tools for portal-based submission, analytics, and data management, the PBS Works suite is a comprehensive solution for optimizing HPC environments. Leveraging a revolutionary “pay-for-use” unit-based business model, PBS Works delivers increased value and flexibility over conventional software-licensing models.
Tags : 
     Altair
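The distributed resource sharing that PBS Professional provides is typically driven by batch job scripts. The following is a hypothetical sketch of such a script; the job name, queue name, node counts, and application paths are illustrative assumptions, not part of any Altair documentation.

```shell
#!/bin/bash
# Hypothetical PBS Professional job script: the #PBS directives are
# resource requests that the scheduler matches against shared,
# possibly geographically distributed, cluster resources.
#PBS -N cfd_run             # job name (example)
#PBS -l select=4:ncpus=16   # request 4 nodes with 16 cores each
#PBS -l walltime=02:00:00   # scheduler enforces this time limit
#PBS -q workq               # site-defined queue (example name)

cd "$PBS_O_WORKDIR"         # PBS sets this to the submission directory
mpirun -np 64 ./solver input.dat
```

A script like this would be submitted with `qsub job.sh` and monitored with `qstat`; the scheduler's policies decide where and when it runs.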
By: Bright Computing     Published Date: May 05, 2014
A successful HPC cluster is a powerful asset for an organization. The following essential strategies are guidelines for the effective operation of an HPC cluster resource:
1. Plan To Manage the Cost of Software Complexity
2. Plan for Scalable Growth
3. Plan to Manage Heterogeneous Hardware/Software Solutions
4. Be Ready for the Cloud
5. Have an Answer for the Hadoop Question
Bright Cluster Manager addresses the above strategies remarkably well and allows HPC and Hadoop clusters to be easily created, monitored, and maintained using a single comprehensive user interface. Administrators can focus on more sophisticated, value-adding tasks rather than developing homegrown solutions that may cause problems as clusters grow and change. The end result is an efficient and successful HPC cluster that maximizes user productivity.
Tags : bright computing, hpc clusters
     Bright Computing
By: IBM     Published Date: Jun 05, 2014
Whether in high-performance computing, Big Data or analytics, information technology has become an essential tool in today’s hyper-competitive business landscape. Organizations are increasingly being challenged to do more with less and this is fundamentally impacting the way that IT infrastructure is deployed and managed. In this short e-book, learn the top ten ways that IBM Platform Computing customers are using technologies like IBM Platform LSF and IBM Platform Symphony to help obtain results faster, share resources more efficiently, and improve the overall cost-effectiveness of their global IT infrastructure.
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
This demonstration shows how an organization using IBM Platform Computing workload managers can easily and securely tap resources in the IBM SoftLayer public cloud to handle periods of peak demand and reduce total IT infrastructure costs.
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
Are infrastructure limitations holding you back?
- Users struggling with access to sufficient compute resources
- Resources tapped out during peak demand times
- Lack of budget, space or power for the environment
IBM recently announced a new Platform Computing cloud service delivering hybrid cloud optimized for analytics and technical computing applications. The offering provides:
- Ready-to-run IBM Platform LSF & Symphony clusters in the cloud
- Seamless workload management, on premise and in the cloud
- 24x7 cloud operation technical support
- Dedicated, isolated physical machines for complete security
Join us for this brief 20-minute webcast to learn how IBM offers a complete end-to-end hybrid cloud solution that may be key to improving your organization’s effectiveness and expediting time to market for your products.
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
In an audited benchmark conducted by STAC®, the Securities Technology Analysis Center, InfoSphere BigInsights for Hadoop was found to deliver an approximate 4x performance gain on average over open source Hadoop running jobs derived from production workload traces. The result is consistent with an approximate eleven times advantage in raw scheduling performance provided by Adaptive MapReduce – a new InfoSphere BigInsights for Hadoop feature that leverages high-performance computing technology from IBM Platform Computing.
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
IBM Platform LSF is a powerful workload management platform for demanding, distributed HPC environments. It provides a comprehensive set of intelligent, policy-driven scheduling features that enable you to utilize all of your compute infrastructure resources and ensure optimal application performance.
Tags : ibm
     IBM
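The policy-driven scheduling described above is exercised through LSF's submission commands. The sketch below is a hypothetical example; the queue name, job array size, and script path are assumptions for illustration, not values from the white paper.

```shell
# Hypothetical LSF submission sketch. -J declares a 100-task job
# array, -n requests 8 slots per task, and -R asks the scheduler to
# reserve 4 GB of memory per slot; the site's queue policies
# (here an assumed queue called "normal") decide placement.
bsub -J "render[1-100]" -q normal -n 8 \
     -R "rusage[mem=4096]" ./render_frame.sh

# Monitor the array by job name.
bjobs -J render
```

Because the resource requirements travel with the job rather than being hard-wired to hosts, the scheduler can pack work onto whatever compute infrastructure is currently available.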
By: IBM     Published Date: Jun 05, 2014
IBM Platform Symphony is a high performance SOA grid server that optimizes application performance and resource sharing. Platform Symphony runs distributed application services on a scalable, shared, heterogeneous grid and accelerates a wide variety of parallel applications, quickly computing results while making optimal use of available infrastructure. Platform Symphony Developer Edition enables developers to rapidly develop and test applications without the need for a production grid. After applications are running in the Developer Edition, they are guaranteed to run at scale once published to a scaled-out Platform Symphony grid. Platform Symphony Developer Edition also enables developers to easily test and verify Hadoop MapReduce applications against IBM Platform Symphony. By leveraging IBM Platform Symphony’s proven, low-latency grid computing solution, more MapReduce jobs can run faster, frequently with less infrastructure.
Tags : ibm
     IBM
By: IBM     Published Date: Aug 27, 2014
Are infrastructure limitations holding you back?
- Users struggling with access to sufficient compute resources
- Resources tapped out during peak demand times
- Lack of budget, space or power for the environment
IBM recently announced a new Platform Computing cloud service delivering hybrid cloud optimized for analytics and technical computing applications. The offering provides:
- Ready-to-run IBM Platform LSF & Symphony clusters in the cloud
- Seamless workload management, on premise and in the cloud
- 24x7 cloud operation technical support
- Dedicated, isolated physical machines for complete security
Join us for this brief 20-minute webcast to learn how IBM offers a complete end-to-end hybrid cloud solution that may be key to improving your organization’s effectiveness and expediting time to market for your products.
Tags : ibm, hybrid cloud
     IBM
By: IBM     Published Date: Sep 16, 2015
6 criteria for evaluating a high-performance cloud services provider
Engineering, scientific, analytics, big data and research workloads place extraordinary demands on technical and high-performance computing (HPC) infrastructure. Supporting these workloads can be especially challenging for organizations that have unpredictable spikes in resource demand, or need access to additional compute or storage resources for a project or to support a growing business. Software Defined Infrastructure (SDI) enables organizations to deliver HPC services in the most efficient way possible, optimizing resource utilization to accelerate time to results and reduce costs. SDI is the foundation for a fully integrated environment, optimizing compute, storage and networking infrastructure to quickly adapt to changing business requirements, and dynamically managing workloads and data.
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
Learn how organizations in cancer research, speech recognition, financial services, automotive design and more are using IBM solutions to improve business results. IBM Software Defined Infrastructure enables organizations to deliver IT services in the most efficient way possible, optimizing resource utilization to accelerate time to results and reduce costs. It is the foundation for a fully integrated software defined environment, optimizing compute, storage and networking infrastructure so organizations can quickly adapt to changing business requirements.
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
Docker is a lightweight Linux container technology built on top of LXC (LinuX Containers) and cgroup (control groups), which offers many attractive benefits for HPC environments. Find out more about how IBM Platform LSF® and Docker have been integrated outside the core of Platform LSF with a real world example involving the application BWA (bio-bwa.sourceforge.net). This step-by-step white paper provides details on how to get started with the IBM Platform LSF and Docker integration which is available via open beta on Service Management Connect.
Tags : 
     IBM
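The integration pattern the Docker/LSF white paper describes wraps container launch outside the Platform LSF core; the sketch below only shows the general shape of submitting a containerized BWA run through a scheduler. The queue name, image name, and data paths are illustrative assumptions, not details from the paper.

```shell
# Hypothetical shape of an LSF-submitted Docker job: LSF schedules
# the work, and the payload command starts a BWA container that
# inherits cgroup resource limits from the container runtime.
# Queue, image tag, and /shared paths are assumed examples.
bsub -q docker_queue -o bwa.%J.out \
     docker run --rm -v /shared/genomes:/data biocontainers/bwa \
     bwa mem /data/ref.fa /data/reads.fq
```

In the integration the paper covers, the Docker invocation is handled by scripts around LSF rather than typed inline like this, but the division of labor is the same: LSF owns scheduling, Docker owns the isolated runtime environment.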
By: IBM     Published Date: Sep 16, 2015
Are you trying to support more variable workloads than your environment can handle? Could you benefit from a high performance cluster, but lack the budget or resources to deploy and manage technical computing infrastructure? Are you running out of data center space but still need to grow your compute capacity? If you want answers to any of these questions, then please join us for an informative webinar describing the advantages and pitfalls of relocating a high performance workload to the cloud. View this webinar to learn:
- Why general purpose clouds are insufficient for technical computing, analytics and Hadoop workloads
- How high performance clouds can improve your profitability and give you a competitive edge
- How to ensure that your cloud environment is secure
- How to evaluate which applications are suitable for a hybrid or public cloud environment
- How to get started and choose a service provider
Tags : 
     IBM
By: IBM     Published Date: May 20, 2015
Whether in high-performance computing, Big Data or analytics, information technology has become an essential tool in today’s hyper-competitive business landscape. Organizations are increasingly being challenged to do more with less and this is fundamentally impacting the way that IT infrastructure is deployed and managed. In this short e-book, learn the top ten ways that IBM Platform Computing customers are using technologies like IBM Platform LSF and IBM Platform Symphony to help obtain results faster, share resources more efficiently, and improve the overall cost-effectiveness of their global IT infrastructure.
Tags : 
     IBM
By: IBM     Published Date: May 20, 2015
Are you trying to support more variable workloads than your environment can handle? Could you benefit from a high performance cluster, but lack the budget or resources to deploy and manage technical computing infrastructure? Are you running out of data center space but still need to grow your compute capacity? If you want answers to any of these questions, then please join us for an informative webinar describing the advantages and pitfalls of relocating a high performance workload to the cloud. View this webinar to learn:
- Why general purpose clouds are insufficient for technical computing, analytics and Hadoop workloads
- How high performance clouds can improve your profitability and give you a competitive edge
- How to ensure that your cloud environment is secure
- How to evaluate which applications are suitable for a hybrid or public cloud environment
- How to get started and choose a service provider
Tags : 
     IBM
By: IBM     Published Date: May 20, 2015
Are infrastructure limitations holding you back?
- Users struggling with access to sufficient compute resources
- Resources tapped out during peak demand times
- Lack of budget, space or power for the environment
IBM recently announced a new Platform Computing cloud service delivering hybrid cloud optimized for analytics and technical computing applications. The offering provides:
- Ready-to-run IBM Platform LSF & Symphony clusters in the cloud
- Seamless workload management, on premise and in the cloud
- 24x7 cloud operation technical support
- Dedicated, isolated physical machines for complete security
Join us for this brief 20-minute webcast to learn how IBM offers a complete end-to-end hybrid cloud solution that may be key to improving your organization’s effectiveness and expediting time to market for your products.
Tags : 
     IBM
By: Adaptive Computing     Published Date: Feb 21, 2014
Big data applications represent a fast-growing category of high-value applications that are increasingly employed by business and technical computing users. However, they have exposed an inconvenient dichotomy in the way resources are utilized in data centers. Conventional enterprise and web-based applications can be executed efficiently in virtualized server environments, where resource management and scheduling is generally confined to a single server. By contrast, data-intensive analytics and technical simulations demand large aggregated resources, necessitating intelligent scheduling and resource management that spans a computer cluster, cloud, or entire data center. Although these tools exist in isolation, they are not available in a general-purpose framework that allows them to interoperate easily and automatically within existing IT infrastructure.
Tags : 
     Adaptive Computing
By: Seagate     Published Date: Feb 02, 2015
An introduction to the inner workings of the world’s most scalable and popular open source HPC file system
Tags : 
     Seagate
By: Seagate     Published Date: Sep 30, 2015
Although high-performance computing (HPC) often stands apart from a typical IT infrastructure—it uses highly specialized scale-out compute, networking and storage resources—it shares with mainstream IT the ability to push data center capacity to the breaking point. Much of this is due to data center inefficiencies caused by HPC storage growth. The Seagate® ClusterStor™ approach to scale-out HPC storage can significantly improve data center efficiency. No other vendor solution offers the same advantages.
Tags : 
     Seagate
By: IBM     Published Date: Nov 14, 2014
View this demo to learn how IBM Platform Computing Cloud Service running on the SoftLayer Cloud helps you: quickly get your applications deployed on ready-to-run clusters in the cloud; manage workloads seamlessly between on-premise and cloud-based resources; get help from the experts with 24x7 Support; share and manage data globally; and protect your IP through physical isolation of bare metal hardware assets.
Tags : 
     IBM
By: IBM     Published Date: Nov 14, 2014
IBM® has created a proprietary implementation of the open-source Hadoop MapReduce run-time that leverages the IBM Platform™ Symphony distributed computing middleware while maintaining application-level compatibility with Apache Hadoop.
Tags : 
     IBM
By: IBM     Published Date: Nov 14, 2014
The necessary compute power to drive modern biomedical research is a formidable and familiar challenge throughout the life sciences. Underlying infrastructures must evolve to keep pace with innovation. In such demanding HPC environments, cloud computing technologies represent a powerful approach to managing technical computing resources. This paper shares valuable insights for life science centers leveraging cloud concepts to manage their infrastructure.
Tags : 
     IBM
By: IBM     Published Date: Nov 14, 2014
Platform HPC enables HPC customers to side-step many overhead cost and support issues that often plague open-source environments and enable them to deploy powerful, easy to use clusters.
Tags : 
     IBM

Add White Papers

Get your white papers featured in the insideHPC White Paper Library contact: Kevin@insideHPC.com