
Results 1 - 25 of 3458
By: InsideHPC Special Report     Published Date: Aug 17, 2016
A single issue has always defined the history of HPC systems: performance. While offloading and co-design may seem like new approaches to computing, they actually have been used, to a lesser degree, in the past as a way to enhance performance. Current co-design methods are now going deeper into cluster components than was previously possible. These new capabilities extend from the local cluster nodes into the “computing network.”
Tags : 
     InsideHPC Special Report
By: Bright Computing     Published Date: May 05, 2014
A successful HPC cluster is a powerful asset for an organization. The following essential strategies are guidelines for the effective operation of an HPC cluster resource:
1. Plan to Manage the Cost of Software Complexity
2. Plan for Scalable Growth
3. Plan to Manage Heterogeneous Hardware/Software Solutions
4. Be Ready for the Cloud
5. Have an Answer for the Hadoop Question
Bright Cluster Manager addresses the above strategies remarkably well and allows HPC and Hadoop clusters to be easily created, monitored, and maintained using a single comprehensive user interface. Administrators can focus on more sophisticated, value-adding tasks rather than developing homegrown solutions that may cause problems as clusters grow and change. The end result is an efficient and successful HPC cluster that maximizes user productivity.
Tags : bright computing, hpc clusters
     Bright Computing
By: IBM     Published Date: Sep 16, 2015
6 criteria for evaluating high-performance cloud services providers. Engineering, scientific, analytics, big data and research workloads place extraordinary demands on technical and high-performance computing (HPC) infrastructure. Supporting these workloads can be especially challenging for organizations that have unpredictable spikes in resource demand, or need access to additional compute or storage resources for a project or to support a growing business. Software Defined Infrastructure (SDI) enables organizations to deliver HPC services in the most efficient way possible, optimizing resource utilization to accelerate time to results and reduce costs. SDI is the foundation for a fully integrated environment, optimizing compute, storage and networking infrastructure to quickly adapt to changing business requirements, while dynamically managing workloads and data.
Tags : 
     IBM
By: SGI     Published Date: Mar 03, 2015
The SGI UV system is uniquely suited to bioinformatics and genomics, providing the computational capabilities and global shared-memory architecture needed for even the most demanding sequencing and analytic tasks, including post-sequencing and other data-intensive workflows. Because of the system's outstanding speed and throughput, genomics researchers can perform very large jobs in less time, realizing a dramatically accelerated time-to-solution. Best of all, they can explore avenues of research that were computationally beyond the reach of HPC systems lacking the power and in-memory capabilities of the SGI UV.
Tags : 
     SGI
By: SGI     Published Date: Jun 08, 2016
With High Performance Computing (HPC) supercomputer systems that comprise tens, hundreds, or even thousands of computing cores, users are able to increase application performance and accelerate their workflows to realize dramatic productivity improvements. The performance potential often comes at the cost of complexity. By their very nature, supercomputers comprise a great number of components, both hardware and software, that must be installed, configured, tuned, and monitored to maintain maximum efficiency. In a recent report, IDC lists downtime and latency as two of the most important problems faced by data center managers.
Tags : 
     SGI
By: Penguin Computing     Published Date: Mar 23, 2015
The Open Compute Project, initiated by Facebook as a way to increase computing power while lowering associated costs with hyper-scale computing, has gained a significant industry following. While the initial specifications were created for a Web 2.0 environment, Penguin Computing has adapted these concepts to create a complete hardware ecosystem solution that addresses these needs and more. The Tundra OpenHPC system is applicable to a wide range of HPC challenges and delivers the most requested features for data center architects.
Tags : penguin computing, open computing, computing power, hyper-scale computing, tundra openhpc
     Penguin Computing
By: Seagate     Published Date: Sep 30, 2015
In all things, but especially in high-performance computing (HPC), the details matter. In HPC, the most important detail is drive efficiency, especially when analyzing storage performance. This fundamental metric is easy to calculate and can have a huge impact on HPC storage performance. In this guide, we’ll look at why conventional HPC storage architectures do so poorly at drive-level throughput and what that means to an organization’s ability to realize the full potential of its HPC investments. We’ll also explain why ClusterStor™ from Seagate® excels in this arena.
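The guide itself doesn't spell out the formula here, but as an illustrative sketch (the function name and all figures below are made up, not from the Seagate paper), drive efficiency can be computed as measured aggregate throughput divided by the theoretical peak throughput of all drives combined:

```python
# Hypothetical illustration of drive-level efficiency.
# All names and numbers are made-up examples, not figures from the paper.

def drive_efficiency(measured_throughput_mbps: float,
                     num_drives: int,
                     per_drive_peak_mbps: float) -> float:
    """Return measured throughput as a fraction of aggregate drive peak."""
    theoretical_peak = num_drives * per_drive_peak_mbps
    return measured_throughput_mbps / theoretical_peak

# Example: 500 drives rated at 150 MB/s each; the system delivers 30,000 MB/s.
eff = drive_efficiency(30_000, 500, 150)
print(f"Drive efficiency: {eff:.0%}")  # prints "Drive efficiency: 40%"
```

A low ratio like this is what the guide attributes to conventional HPC storage architectures: the drives are capable of far more throughput than the system extracts from them.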
Tags : 
     Seagate
By: Seagate     Published Date: Sep 30, 2015
In all things, but especially in high-performance computing (HPC), the details matter. In HPC, the most important detail is drive efficiency, especially when analyzing storage performance. This fundamental metric is easy to calculate and can have a huge impact on HPC storage performance. In this guide, we’ll look at why conventional HPC storage architectures do so poorly at drive-level throughput and what that means to an organization’s ability to realize the full potential of its HPC investments. We’ll also explain why ClusterStor™ from Seagate® excels in this arena.
Tags : 
     Seagate
By: NVIDIA & Bright Computing     Published Date: Sep 01, 2015
As of June 2015, the second fastest computer in the world, as measured by the Top500 list, employed NVIDIA® GPUs. Of the systems on the same list that use accelerators, 60% use NVIDIA GPUs. The performance kick provided by computing accelerators has pushed High Performance Computing (HPC) to new levels. When discussing GPU accelerators, the focus is often on the price-to-performance benefits to the end user. The true cost of managing and using GPUs goes far beyond the hardware price, however. Understanding and managing these costs helps provide more efficient and productive systems.
Tags : 
     NVIDIA & Bright Computing
By: Intel     Published Date: Sep 16, 2014
In this Guide, we take a look at what an HPC solution like Lustre can deliver for a broad community of business and commercial organizations struggling with the challenge of big data and demanding storage growth.
Tags : intel, lustre*, solution for business
     Intel
By: Intel     Published Date: Apr 01, 2016
Since its beginnings in 1999 as a project at Carnegie Mellon University, Lustre, the high-performance parallel file system, has come a long, long way. Designed from the start for performance and scalability, it is now part of nearly every High Performance Computing (HPC) cluster on the Top500.org list of the fastest computers in the world, present in 70 percent of the top 100 and nine of the top ten. That's an achievement for any developer (or, in Lustre's case, community of developers) to be proud of.
Tags : 
     Intel
By: Panasas     Published Date: Oct 02, 2014
HPC and technical computing environments require the collection, storage, and transmission of large-scale datasets. To meet these demands, data center architects must consider how increasing storage capacity over time will affect HPC workloads, performance, and system availability. While many enterprises have looked to scale-up NAS to meet their storage needs, this approach can lead to data islands that make it difficult to share data. Distributed, scale-out storage was developed to get around the technology limitations of scale-up NAS architectures.
Tags : 
     Panasas
By: Dell and Intel®     Published Date: Mar 30, 2015
Dell has teamed with Intel® to create innovative solutions that can accelerate the research, diagnosis and treatment of diseases through personalized medicine. The combination of leading-edge Intel® Xeon® processors and Dell's systems and storage expertise creates a state-of-the-art data center solution that is easy to install, manage and expand as required. Labeled the Dell Genomic Data Analysis Platform (GDAP), this solution is designed to achieve fast results with maximum efficiency. The solution is architected to address a number of customer challenges, including the perception that implementations must be large-scale, as well as compliance, security and clinician use.
Tags : 
     Dell and Intel®
By: SGI     Published Date: Mar 25, 2016
This paper offers those considering HPC, both users and managers, guidance on the best way to deploy an HPC solution. Three important questions help determine the most appropriate HPC design (scale-up or scale-out) that meets your goals and accelerates your discoveries.
Tags : 
     SGI
By: Penguin Computing     Published Date: Oct 14, 2015
IT organizations are facing increasing pressure to deliver critical services to their users while their budgets are either reduced or maintained at current levels. New technologies have the potential to deliver industry-changing information to users who need data in real time, but only if the IT infrastructure is designed and implemented to do so. While computing power continues to decline in cost, the management of large data centers, together with the associated costs of running these data centers, increases. The server administration over the life of the computer asset will consume about 75 percent of the total cost.
Tags : 
     Penguin Computing
By: HP     Published Date: Oct 08, 2015
Administrators, engineers and executives are now tasked with solving some of the world's most complex challenges. This could revolve around advanced computations for science, business, education, pharmaceuticals and beyond. Here's the challenge: many data centers are reaching peak levels of resource consumption, and there's more work to be done. So how are engineers and scientists supposed to continue working around such high-demand applications? How can they continue to create ground-breaking research while still utilizing optimized infrastructure? How can a platform scale to the new needs and demands of these types of users and applications? This is where HP Apollo Systems help reinvent the modern data center and accelerate your business.
Tags : apollo systems, reinventing hpc and the supercomputer, reinventing modern data center
     HP
By: TYAN     Published Date: Jun 06, 2016
Organizations that implement high-performance computing (HPC) technologies have a wide range of requirements. From small manufacturing suppliers to national research institutions, using significant computing technologies is critical to creating innovative products and leading-edge research. No two HPC installations are the same. For maximum return, budget, software requirements, performance and customization all must be considered before installing and operating a successful environment.
Tags : 
     TYAN
By: HPE     Published Date: Jul 21, 2016
Deep learning is a method of creating artificial intelligence systems that combine computer-based multi-layer neural networks with intensive training techniques and large data sets to enable analysis and predictive decision making. A fundamental aspect of deep learning environments is that they transcend finite programmable constraints to the realm of extensible and trainable systems. Recent developments in technology and algorithms have enabled deep learning systems to not only equal but exceed human capabilities in the pace of processing vast amounts of information.
Tags : 
     HPE
By: VMware     Published Date: Aug 17, 2014
Over the past several years, virtualization has made major inroads into enterprise IT infrastructures. And now it is moving into the realm of high performance computing (HPC), especially for such compute intensive applications as electronic design automation (EDA), life sciences, financial services and digital media entertainment.
Tags : vmware, virtualization
     VMware
By: SGI     Published Date: Mar 25, 2016
This paper offers those considering HPC, both users and managers, guidance when considering the best way to deploy an HPC solution. Three important questions are suggested that help determine the most appropriate HPC design (scale-up or scale out) that meets your goal and accelerates your discoveries.
Tags : 
     SGI
By: Hewlett Packard Enterprise     Published Date: Jan 31, 2019
Powerful IT doesn’t have to be complicated. Hyperconvergence puts your entire virtualized infrastructure and advanced data services into one integrated powerhouse. Deploy HCI on an intelligent fabric that can scale with your business and you can hyperconverge the entire IT stack. This guide will help you: Understand the basic tenets of hyperconvergence and the software-defined data center; Solve for common virtualization roadblocks; Identify 3 things modern organizations want from IT; Apply 7 hyperconverged tactics to your existing infrastructure now.
Tags : 
     Hewlett Packard Enterprise
By: Zendesk     Published Date: Jan 04, 2019
Your customers are more tech-savvy than ever and prefer a do-it-yourself approach to solving their problems and answering their own questions. Years of research by ICMI have confirmed that customers prefer to resolve issues on their own, using their favorite channels. What's more, customers only seek direct interaction after exhausting all self-service options. This is supported by data from American Express, which found that 48% of consumers prefer to speak with a customer service representative to handle complex issues, but only 16% prefer that same contact for simple problems. The goal of this document is simple: we want to help you build a complete knowledge base, community, and customer portal. And you can do all of this using the Zendesk Guide Help Center.
Tags : 
     Zendesk
By: Evariant     Published Date: Nov 08, 2018
Smarter growth for healthcare providers requires a strong patient acquisition engine that can demonstrate ROI. By focusing on the financial impact of marketing efforts, healthcare marketers demonstrate why these efforts warrant further investment. This guide presents the Evariant approach to calculating service line ROI from marketing spend. It includes a standard set of formulas and optional factors that may be relevant to certain service lines and system-wide operations.
Tags : healthcare marketing, marketing roi, patient acquisition, marketing measurement
     Evariant
By: Evariant     Published Date: Nov 08, 2018
Healthcare CRMs allow marketers to implement precision marketing techniques to target the patients most likely to need a service, align with and improve the patient journey, and engage patients to drive loyalty and retention. Every hospital and healthcare system benefits from a correctly implemented CRM solution, as it helps organizations build engaged and loyal audiences. Download this guide to learn how to improve your marketing engine through the use of a healthcare CRM.
Tags : healthcare crm, patient acquisition, precision marketing, healthcare consumer data
     Evariant
By: Evariant     Published Date: Nov 08, 2018
Many healthcare executives initially looked to their EHR/EMR systems and patient portals to engage patients. But these technologies are not the complete solution. This eBook explores what your enterprise tech stack needs to look like to properly find, guide and keep patients for life through meaningful engagement.
Tags : patient engagement, patient loyalty, patient retention, healthcare crm, electronic medical records
     Evariant

Add White Papers

Get your white papers featured in the insideHPC White Paper Library contact: Kevin@insideHPC.com