By: Cray     Published Date: Jul 02, 2015
The Cray XC series is a distributed memory system developed as part of Cray’s participation in the Defense Advanced Research Projects Agency’s (DARPA) High Productivity Computing System (HPCS) program. Previously codenamed “Cascade,” the Cray XC system is capable of sustained multi-petaflops performance and features a hybrid architecture combining multiple processor technologies, a high performance network and a high performance operating system and programming environment.
Tags : 
     Cray
By: Altair     Published Date: Feb 19, 2014
The Weizmann Institute of Science is one of the world’s leading multidisciplinary research institutions. Hundreds of scientists, laboratory technicians and research students embark daily on fascinating journeys into the unknown, seeking to improve our understanding of nature and our place within it. Groundbreaking medical and technological applications that have emerged from basic research conducted by Weizmann Institute scientists include: amniocentesis, a prenatal diagnostic test; sophisticated laser systems for high-precision diamond cutting; living polymerization, one of the most fundamental techniques of the modern polymer industry; and ribosome structure analysis, for which the Institute’s Professor Ada Yonath was awarded a Nobel Prize in Chemistry.
Tags : 
     Altair
By: Altair     Published Date: Jul 15, 2014
NSCEE’s new workload management solution, including PBS Professional, has reduced overall runtimes for processing workloads. Furthermore, NSCEE credits PBS Professional with improving system manageability and extensibility thanks to these key features:
• Lightweight solution
• Very easy to manage
• Not dependent on any specific operating system
• Can be easily extended by adding site-specific processing plugins/hooks
To learn more, read the full paper.
Tags : 
     Altair
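The "site-specific plugins/hooks" extensibility credited to PBS Professional refers to its Python hook API, where a site can accept or reject jobs according to local policy. The sketch below shows that style of admission logic; the `JobEvent`/`Job` stubs and the project-attribute rule are illustrative assumptions, since a real hook would obtain its event object from `pbs.event()` inside the scheduler.

```python
# Sketch of site-specific admission logic, in the style of a PBS
# Professional queuejob hook. The stubs below stand in for the event
# object a real hook would obtain from pbs.event().

class Job:
    def __init__(self, project=None, ncpus=1):
        self.project = project  # accounting project code, if any
        self.ncpus = ncpus      # requested CPU cores

class JobEvent:
    def __init__(self, job):
        self.job = job
        self.accepted = None
        self.message = ""

    def accept(self):
        self.accepted = True

    def reject(self, message):
        self.accepted = False
        self.message = message

def queuejob_hook(event, max_ncpus=512):
    """Hypothetical site policy: every job must carry a project code
    and stay within a per-job core limit."""
    job = event.job
    if not job.project:
        event.reject("Jobs must specify a project for accounting")
    elif job.ncpus > max_ncpus:
        event.reject(f"Requested {job.ncpus} cores exceeds limit of {max_ncpus}")
    else:
        event.accept()

ok = JobEvent(Job(project="genomics", ncpus=128))
queuejob_hook(ok)
print(ok.accepted)   # True

bad = JobEvent(Job(project=None))
queuejob_hook(bad)
print(bad.message)   # Jobs must specify a project for accounting
```

Because hooks run inside the scheduler at submission time, policies like this apply uniformly without patching any user-facing tooling.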
By: IBM     Published Date: Jun 05, 2014
As new research and engineering environments are expanded to include more powerful computers to run increasingly complex computer simulations, the management of these heterogeneous computing environments continues to increase in complexity as well. Integrated solutions that include the Intel® Many Integrated Cores (MIC) architecture can dramatically boost aggregate performance for highly-parallel applications.
Tags : 
     IBM
By: IBM     Published Date: Sep 02, 2014
Whether engaged in genome sequencing, drug design, product analysis or risk management, life sciences research needs high-performance technical environments with the ability to process massive amounts of data and support increasingly sophisticated simulations and analyses. Learn how IBM solutions such as IBM® Platform Computing™ high-performance cluster, grid and high-performance computing (HPC) cloud management software can help life sciences organizations transform and integrate their compute environments to develop products better, faster and at less expense.
Tags : ibm, life sciences, platform computing
     IBM
By: IBM     Published Date: Sep 02, 2014
This two-year research initiative, conducted in collaboration with IBM, focuses on key trends, best practices, challenges, and priorities in enterprise risk management, covering topics as diverse as culture, organizational structure, data, systems, and processes.
Tags : ibm, chartis, risk-enabled enterprise
     IBM
By: IBM     Published Date: Sep 16, 2015
6 criteria for evaluating high-performance cloud services providers
Engineering, scientific, analytics, big data and research workloads place extraordinary demands on technical and high-performance computing (HPC) infrastructure. Supporting these workloads can be especially challenging for organizations that have unpredictable spikes in resource demand, or that need access to additional compute or storage resources for a project or to support a growing business. Software Defined Infrastructure (SDI) enables organizations to deliver HPC services in the most efficient way possible, optimizing resource utilization to accelerate time to results and reduce costs. SDI is the foundation for a fully integrated environment, optimizing compute, storage and networking infrastructure to quickly adapt to changing business requirements, and dynamically managing workloads and data.
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
Learn how organizations in cancer research, speech recognition, financial services, automotive design and more are using IBM solutions to improve business results. IBM Software Defined Infrastructure enables organizations to deliver IT services in the most efficient way possible, optimizing resource utilization to accelerate time to results and reduce costs. It is the foundation for a fully integrated software-defined environment, optimizing compute, storage and networking infrastructure so organizations can quickly adapt to changing business requirements.
Tags : 
     IBM
By: SGI     Published Date: Mar 03, 2015
The SGI UV system is uniquely suited for bioinformatics and genomics, providing the computational capabilities and global shared-memory architecture needed for even the most demanding sequencing and analytic tasks, including post-sequencing and other data-intensive workflows. Because of the system's outstanding speed and throughput, genomics researchers can perform very large jobs in less time, realizing a dramatically accelerated time-to-solution. And best of all, they can explore avenues of research that were computationally beyond the reach of HPC systems lacking the power and in-memory capabilities of the SGI UV.
Tags : 
     SGI
By: Bright Computing     Published Date: Feb 22, 2016
Cloud computing offers organizations a clear opportunity to introduce operational efficiency that would be too difficult or costly to achieve internally. As such, we are continuing to see an increase in cloud adoption for workloads across the commercial market. However, recent research suggests that, despite continued increases in companies reporting that they are using cloud computing, the vast majority of corporate workloads remain on premises. It is clear that companies are carefully considering the security, privacy, and performance aspects of cloud transition, and struggling to achieve cloud adoption with limited internal cloud expertise. Register to read more.
Tags : 
     Bright Computing
By: IBM     Published Date: Nov 14, 2014
The necessary compute power to drive modern biomedical research is a formidable and familiar challenge throughout the life sciences. Underlying infrastructures must evolve to keep pace with innovation. In such demanding HPC environments, cloud computing technologies represent a powerful approach to managing technical computing resources. This paper shares valuable insights from life science centers leveraging cloud concepts to manage their infrastructure.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
A*Star had high levels of user discontent and not enough computational resources for the population of users or the number of research projects. Platform LSF acted as the single unifying workload scheduler and helped rapidly increase resource utilization.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
University of East Anglia wished to create a “green” HPC resource, increase compute power and support research across multiple operating systems. Platform HPC increased compute power from 9 to 21.5 teraflops, cut power consumption rates and costs and provided flexible, responsive support.
Tags : 
     IBM
By: Dell and Intel®     Published Date: Mar 30, 2015
Dell has teamed with Intel® to create innovative solutions that can accelerate the research, diagnosis and treatment of diseases through personalized medicine. The combination of leading-edge Intel® Xeon® processors and the systems and storage expertise from Dell creates a state-of-the-art data center solution that is easy to install, manage and expand as required. Labelled the Dell Genomic Data Analysis Platform (GDAP), this solution is designed to achieve fast results with maximum efficiency. The solution is architected to address a number of customer challenges, including the perception that implementations must be large-scale, as well as compliance, security and clinician-use concerns.
Tags : 
     Dell and Intel®
By: Dell and Intel®     Published Date: Nov 18, 2015
The NCSA Private Sector Program creates a high-performance computing cluster to help corporations overcome critical challenges. Through its Private Sector Program (PSP), NCSA has provided supercomputing, consulting, research, prototyping and development, and production services to more than one-third of the Fortune 50, in manufacturing, oil and gas, finance, retail/wholesale, bio/medical, life sciences, technology and other sectors. “We’re not the typical university supercomputer center, and PSP isn’t a typical group,” Giles says. “Our focus is on helping companies leverage high-performance computing in ways that make them more competitive.”
Tags : 
     Dell and Intel®
By: General Atomics     Published Date: Jan 13, 2015
The term “Big Data” has become virtually synonymous with “schema on read” (where a schema is applied to data as it is ingested or read from storage) and with unstructured data analysis and handling techniques like Hadoop. These “schema on read” techniques have been most famously exploited on relatively ephemeral human-readable data like retail trends, Twitter sentiment, social network mining, log files, etc. But what if you have unstructured data that, on its own, is hugely valuable, enduring, and created at great expense? Data that may not immediately be human-readable or indexable by search? Exactly the kind of data most commonly created and analyzed in science and HPC. Research institutions are awash with such data from large-scale experiments and extreme-scale computing that is used for high-consequence
Tags : general atomics, big data, metadata, nirvana
     General Atomics
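The "schema on read" idea described above can be made concrete in a few lines: raw records are stored exactly as they arrive, and a schema is imposed only when the data is read. The record format and field names below are invented for illustration.

```python
import csv
import io

# Raw, untyped records stored exactly as they arrived. A "schema on
# write" system would instead have forced a structure at ingest time.
raw_store = """ts,sensor,reading
1420070400,beamline-3,0.72
1420070460,beamline-3,0.81
1420070520,cryostat-1,n/a
"""

def read_with_schema(raw):
    """Apply a schema at read time: parse types and skip rows that do
    not fit, leaving the raw store untouched for other consumers."""
    rows = []
    for rec in csv.DictReader(io.StringIO(raw)):
        try:
            rows.append({
                "ts": int(rec["ts"]),
                "sensor": rec["sensor"],
                "reading": float(rec["reading"]),  # the schema decision happens here
            })
        except ValueError:
            # Row doesn't match this schema; a different reader with a
            # looser schema could still make use of it.
            continue
    return rows

parsed = read_with_schema(raw_store)
print(len(parsed))  # 2 rows survive this particular schema
```

The key property is that the same raw store can serve many schemas: a second reader could keep the "n/a" row by treating readings as strings, without any re-ingestion.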
By: HP     Published Date: Oct 08, 2015
Administrators, engineers and executives are now tasked with solving some of the world’s most complex challenges, from advanced computations for science to business, education, pharmaceuticals and beyond. Here’s the challenge: many data centers are reaching peak levels of resource consumption, and there’s more work to be done. So how are engineers and scientists supposed to continue working on such high-demand applications? How can they continue to produce ground-breaking research while still utilizing optimized infrastructure? How can a platform scale to the new needs and demands of these types of users and applications? This is where HP Apollo Systems help reinvent the modern data center and accelerate your business.
Tags : apollo systems, reinventing hpc and the supercomputer, reinventing modern data center
     HP
By: TYAN     Published Date: Jun 06, 2016
Organizations that implement high-performance computing (HPC) technologies have a wide range of requirements. From small manufacturing suppliers to national research institutions, using significant computing technologies is critical to creating innovative products and leading-edge research. No two HPC installations are the same. For maximum return, budget, software requirements, performance and customization all must be considered before installing and operating a successful environment.
Tags : 
     TYAN
By: Data Direct Networks     Published Date: Dec 31, 2015
Using high-performance parallel storage solutions, geologists and researchers can now incorporate larger data sets and execute more seismic and reservoir simulations faster than ever before, enabling higher-fidelity geological analysis and significantly reduced exploration risk. With the high costs of exploration, oil and gas companies are increasingly turning to high-performance DDN storage solutions to eliminate I/O bottlenecks and minimize risk and costs, while delivering a larger number of higher-fidelity simulations in the same time as traditional storage architectures.
Tags : 
     Data Direct Networks
By: Data Direct Networks     Published Date: Dec 31, 2015
When it comes to generating increasingly larger data sets and stretching the limits of high performance computing (HPC), the field of genomics and next generation sequencing (NGS) is at the forefront. The major impetus for this data explosion began in 1990 when the U.S. kicked off the Human Genome Project, an ambitious effort to sequence the three billion base pairs that constitute the complete set of DNA in the human body. Eleven years and $3 billion later, the deed was done. This breakthrough was followed by a massive upsurge in genomics research and development that included rapid advances in sequencing using the power of HPC. Today an individual’s genome can be sequenced overnight for less than $1,000.
Tags : 
     Data Direct Networks
By: VMware     Published Date: Aug 17, 2014
Over the past several years, virtualization has made major inroads into enterprise IT infrastructures. And now it is moving into the realm of high performance computing (HPC), especially for such compute intensive applications as electronic design automation (EDA), life sciences, financial services and digital media entertainment.
Tags : vmware, virtualization
     VMware
By: Kindred Hospital Rehabilitation Services     Published Date: Aug 26, 2019
Research has demonstrated that enhanced technology can improve communication between patients, families and care providers, improve motivation, and lead to better outcomes and higher levels of patient satisfaction. Better technology also makes the workplace more appealing to employees. Given the investment and complexity involved, how can health systems use technology in the most efficient and effective ways to drive business results?
Tags : 
     Kindred Hospital Rehabilitation Services
By: SAP EMEA Global     Published Date: Sep 13, 2019
Questions posed by: SAP SuccessFactors
Answers by: Lisa Rowan, Research Vice President, HR, Talent, and Learning Strategies
Tags : 
     SAP EMEA Global
By: Oracle     Published Date: Sep 25, 2019
Research shows that legacy ERP 1.0 systems were not designed for usability and insight. More than three-quarters of business leaders say their current ERP system doesn’t meet their requirements, let alone their future plans.[1] These systems lack the modern best-practice capabilities needed to compete and grow. To enable today’s data-driven organization, the very foundation from which you are operating needs to be re-established; it needs to be “modernized”. Oracle’s goal is to help you navigate your own journey to modernization by sharing the knowledge we’ve gained working with many thousands of customers using both legacy and modern ERP systems. To that end, we’ve crafted this handbook outlining the fundamental characteristics that define modern ERP.
Tags : 
     Oracle
By: F5 Networks Singapore Pte Ltd     Published Date: Sep 19, 2019
Every kind of online interaction (website visits, API calls to mobile apps, and others) is being attacked by bots. Whether it’s fraud, scraping, spam, DDoS, espionage, shilling, or simply altering your SEO ranking, bots are wreaking havoc on websites as well as mobile and business applications. But that’s not all: they’re also messing with your business intelligence (BI). They can skew audience metrics, customer journeys and even ad buys, making business decisions questionable and costly. According to Forrester, ad fraud alone was set to exceed $3.3 billion in 2018. Not all bots are bad. In fact, your business depends on them. Search engine bots, for example, give your web presence visibility and authority online. Other good bots help you deliver better customer experiences; perhaps a chatbot provides instant customer assistance on your site. What’s important is enabling the good bots and blocking the bad ones.
Tags : 
     F5 Networks Singapore Pte Ltd
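The distinction drawn above between enabling good bots and blocking bad ones is, at its simplest, a per-request classification problem. The sketch below is a deliberately naive user-agent check; the patterns and categories are illustrative assumptions, and production bot defenses rely on behavioral and network signals (e.g. verifying crawlers via reverse DNS), since user agents are trivially spoofed.

```python
# Naive bot triage by user-agent substring. Real bot management
# products verify crawler identity and score behavior instead of
# trusting the self-reported user agent.

GOOD_BOT_PATTERNS = ("googlebot", "bingbot")       # crawlers we want to allow
BAD_BOT_PATTERNS = ("scrapy", "python-requests")   # obvious automation to block

def classify_request(user_agent):
    """Return 'allow', 'block', or 'challenge' for a request."""
    ua = user_agent.lower()
    if any(p in ua for p in GOOD_BOT_PATTERNS):
        return "allow"       # search engine bots keep the site visible
    if any(p in ua for p in BAD_BOT_PATTERNS):
        return "block"       # likely scraper; also keep it out of BI metrics
    return "challenge"       # unknown client: rate-limit, CAPTCHA, or score further

print(classify_request("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # allow
print(classify_request("python-requests/2.31.0"))                   # block
```

The three-way outcome matters for the BI point made above: requests routed to "block" or "challenge" can also be excluded from audience metrics so automated traffic does not distort business decisions.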

Get your white papers featured in the insideHPC White Paper Library. Contact: Kevin@insideHPC.com