
Results 1 - 25 of 2225
By: Altair     Published Date: Feb 19, 2014
The Weizmann Institute of Science is one of the world’s leading multidisciplinary research institutions. Hundreds of scientists, laboratory technicians and research students embark daily on fascinating journeys into the unknown, seeking to improve our understanding of nature and our place within it. Groundbreaking medical and technological applications that have emerged from basic research conducted by Weizmann Institute scientists include: amniocentesis, a prenatal diagnostic test; sophisticated laser systems for high-precision diamond cutting; living polymerization, one of the most fundamental techniques of the modern polymer industry; and ribosome structure analysis, for which the Institute’s Professor Ada Yonath was awarded a Nobel Prize in Chemistry.
Tags : 
     Altair
By: Bright Computing     Published Date: May 05, 2014
A successful HPC cluster is a powerful asset for an organization. The following essential strategies are guidelines for the effective operation of an HPC cluster resource:
1. Plan to manage the cost of software complexity
2. Plan for scalable growth
3. Plan to manage heterogeneous hardware/software solutions
4. Be ready for the cloud
5. Have an answer for the Hadoop question
Bright Cluster Manager addresses these strategies remarkably well and allows HPC and Hadoop clusters to be easily created, monitored, and maintained using a single comprehensive user interface. Administrators can focus on more sophisticated, value-adding tasks rather than developing homegrown solutions that may cause problems as clusters grow and change. The end result is an efficient and successful HPC cluster that maximizes user productivity.
Tags : bright computing, hpc clusters
     Bright Computing
By: IBM     Published Date: Jun 05, 2014
In an audited benchmark conducted by STAC®, the Securities Technology Analysis Center, InfoSphere BigInsights for Hadoop was found to deliver an approximately 4x average performance gain over open-source Hadoop running jobs derived from production workload traces. The result is consistent with an approximately 11x advantage in raw scheduling performance provided by Adaptive MapReduce, a new InfoSphere BigInsights for Hadoop feature that leverages high-performance computing technology from IBM Platform Computing.
Tags : ibm
     IBM
By: SGI     Published Date: Mar 03, 2015
The SGI UV system is uniquely suited for bioinformatics and genomics, providing the computational capabilities and global shared memory architecture needed for even the most demanding sequencing and analytic tasks, including post-sequencing and other data-intensive workflows. Because of the system’s outstanding speed and throughput, genomics researchers can perform very large jobs in less time, realizing a dramatically accelerated time-to-solution. Best of all, they can explore avenues of research that were computationally beyond the reach of HPC systems lacking the power and in-memory capabilities of the SGI UV.
Tags : 
     SGI
By: SGI     Published Date: Nov 17, 2015
In the pantheon of HPC grand challenges, weather forecasting and long-term climate simulation rank right up there with the most complex and computationally demanding problems in astrophysics, aeronautics, fusion power, exotic materials, and earthquake prediction, to name just a few. Modern weather prediction requires cooperation in the collection of observed data and sharing of forecast output among all nations, a collaboration that has been ongoing for decades. This data is used to simulate effects on a range of scales: from events such as the path of tornadoes, which change from minute to minute and move over distances measured in meters, to turnover of water layers in the ocean, a process that is measured in decades or even hundreds of years and spans thousands of miles. The amount of data collected is staggering. Hundreds of thousands of surface stations, airborne radiosondes, ships and buoys, aircraft, and dozens of weather satellites are streaming terabytes of information.
Tags : 
     SGI
By: SGI     Published Date: Jun 08, 2016
With High Performance Computing (HPC) supercomputer systems that comprise tens, hundreds, or even thousands of computing cores, users are able to increase application performance and accelerate their workflows to realize dramatic productivity improvements. The performance potential often comes at the cost of complexity. By their very nature, supercomputers comprise a great number of components, both hardware and software, that must be installed, configured, tuned, and monitored to maintain maximum efficiency. In a recent report, IDC lists downtime and latency as two of the most important problems faced by data center managers.
Tags : 
     SGI
By: IBM     Published Date: May 20, 2015
Assembling a clustered environment can be complex due to the many software components required to enable technical and high performance computing (HPC) applications to run effectively. This webcast will demonstrate how IBM Platform Computing products simplify cluster deployment, use and management, bringing high performance capabilities to both experienced and new HPC administrators and users. We will also present examples of companies using Platform Computing software today to improve the performance and utilization of their HPC environments while reducing costs.
Tags : 
     IBM
By: SGI     Published Date: Jan 07, 2015
After a long dry spell triggered in part by the global downturn in the economy, manufacturing is enjoying an economic and technological resurgence. According to the Institute for Supply Management (ISM), American manufacturing continues to improve. The ISM recently reported that manufacturing activity expanded in September 2014 for the 16th consecutive month, and that the overall national economy grew for the 64th consecutive month. Part of this growth is being fueled by the adoption of computer-aided engineering (CAE) and analysis solutions powered by high performance computing (HPC), especially by Tier One manufacturers. HPC is beginning to make some inroads into the ranks of small to medium-sized manufacturers (SMMs), but the going is slow.
Tags : 
     SGI
By: NVIDIA & Bright Computing     Published Date: Sep 01, 2015
As of June 2015, the second fastest computer in the world, as measured by the Top500 list, employed NVIDIA® GPUs. Of the systems on that list that use accelerators, 60% use NVIDIA GPUs. The performance kick provided by computing accelerators has pushed High Performance Computing (HPC) to new levels. When discussing GPU accelerators, the focus is often on the price-to-performance benefits to the end user. The true cost of managing and using GPUs, however, goes far beyond the hardware price. Understanding and managing these costs helps provide more efficient and productive systems.
Tags : 
     NVIDIA & Bright Computing
By: IBM     Published Date: Nov 14, 2014
IBM® has created a proprietary implementation of the open-source Hadoop MapReduce run-time that leverages the IBM Platform™ Symphony distributed computing middleware while maintaining application-level compatibility with Apache Hadoop.
Tags : 
     IBM
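The MapReduce model that run-times like the one above implement at scale can be sketched in a few lines. The following is an illustrative, single-process word count showing the map, shuffle, and reduce phases in Python; it is a sketch of the programming model generally, not of IBM's proprietary implementation, and the record data is invented for the example.

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit a (key, 1) pair for every word in every input record.
    for record in records:
        for word in record.split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group all intermediate values by their key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate (here, sum) the values collected for each key.
    return {key: sum(values) for key, values in groups.items()}

records = ["hpc cluster", "hadoop cluster", "hpc"]
counts = reduce_phase(shuffle(map_phase(records)))
# counts is {"hpc": 2, "cluster": 2, "hadoop": 1}
```

A distributed run-time such as Hadoop runs the map and reduce functions on many nodes in parallel and performs the shuffle over the network, which is where scheduling performance of the kind benchmarked above matters.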
By: IBM     Published Date: Nov 14, 2014
A high performance computing (HPC) cluster refers to a group of servers built from off-the-shelf components that are connected via certain interconnect technologies. A cluster can deliver aggregated computing power from its many processors with many cores — sometimes hundreds, even thousands — to meet the processing demands of more complex engineering software, and therefore deliver results faster than individual workstations. If your company is in the majority that could benefit from access to more computing power, a cluster comprised of commodity servers may be a viable solution to consider, especially now that they’re easier to purchase, deploy, configure and maintain than ever before. Read more and learn about the '5 Easy Steps to a High Performance Cluster'.
Tags : 
     IBM
By: Dell and Intel®     Published Date: Mar 30, 2015
Dell has teamed with Intel® to create innovative solutions that can accelerate the research, diagnosis and treatment of diseases through personalized medicine. The combination of leading-edge Intel® Xeon® processors and the systems and storage expertise from Dell creates a state-of-the-art data center solution that is easy to install, manage and expand as required. Labeled the Dell Genomic Data Analysis Platform (GDAP), this solution is designed to achieve fast results with maximum efficiency. The solution is architected to address a number of customer challenges, including the perception that implementations must be large-scale in nature, as well as compliance, security, and clinician use.
Tags : 
     Dell and Intel®
By: General Atomics     Published Date: Jan 13, 2015
The term “Big Data” has become virtually synonymous with “schema on read” (where data is applied to a plan or schema as it is ingested or pulled out of a stored location) unstructured data analysis and handling techniques like Hadoop. These “schema on read” techniques have been most famously exploited on relatively ephemeral human-readable data like retail trends, Twitter sentiment, social network mining, log files, etc. But what if you have unstructured data that, on its own, is hugely valuable, enduring, and created at great expense? Data that may not immediately be human-readable or indexable for search? Exactly the kind of data most commonly created and analyzed in science and HPC. Research institutions are awash with such data from large-scale experiments and extreme-scale computing that is used for high-consequence
Tags : general atomics, big data, metadata, nirvana
     General Atomics
By: AMD     Published Date: Nov 09, 2015
Graphics Processing Units (GPUs) have become a compelling technology for High Performance Computing (HPC), delivering exceptional performance per watt and impressive densities for data centers. AMD has partnered with Hewlett Packard Enterprise to offer compelling solutions that drive your HPC workloads to new levels of performance. Learn about the performance and energy efficiency of the AMD FirePro™ S9150, found in multiple HPE servers including the popular 2U HPE ProLiant DL380 Gen9 server. See why open standards matter for HPC, and what AMD is doing in this area. Click here to read more on AMD FirePro™ server GPUs for HPE ProLiant servers.
Tags : 
     AMD
By: HPE     Published Date: Jul 21, 2016
Deep learning is a method of creating artificial intelligence systems that combine computer-based multi-layer neural networks with intensive training techniques and large data sets to enable analysis and predictive decision making. A fundamental aspect of deep learning environments is that they transcend finite programmable constraints to the realm of extensible and trainable systems. Recent developments in technology and algorithms have enabled deep learning systems not only to equal but to exceed human capabilities in the pace of processing vast amounts of information.
Tags : 
     HPE
By: Hewlett Packard Enterprise     Published Date: Jan 31, 2019
Extracting value from data is central to the digital transformation required for businesses to succeed in the decades to come. Buried in data are insights that reveal what your customers need and how they want to receive it; how sales, manufacturing, distribution, and other aspects of business operations are functioning; what risks are arising to threaten the business; and more. That insight empowers your business to reach new customers, develop and deliver new products, operate more efficiently and more effectively, and even develop new business models.
Tags : 
     Hewlett Packard Enterprise
By: Hewlett Packard Enterprise     Published Date: Jan 31, 2019
With the maturing of the all-flash array (AFA) market, the established market leaders in this space are turning their attention to ways of differentiating themselves from their competition beyond product functionality. Consciously designing and driving a better customer experience (CX) is a strategy being pursued by many of these vendors. This white paper defines cloud-based predictive analytics, discusses the evolving storage requirements that are driving their use, and looks at how these platforms are being used to drive incremental value for public sector organizations in the areas of performance, availability, management, recovery, and information technology (IT) infrastructure planning.
Tags : 
     Hewlett Packard Enterprise
By: Avanade DACH     Published Date: Jan 08, 2019
To compete in today’s world, business leaders are placing increased demands on IT. Unfortunately, many IT departments are not able to deliver future innovation with their current infrastructure, applications and processes. To meet these demands, IT must digitally transform the enterprise through the adoption of cloud native practices, allowing them to both optimize and transform their existing infrastructure and applications. Recent Avanade research supports this thinking, finding that 88% of senior IT decision-makers believe that IT modernization is crucial to addressing the emerging requirements of the digital business. On the upside, those surveyed also indicated that by modernizing their IT infrastructures they expect to deliver real business results, such as boosting annual revenue by 14% while reducing business operating costs by 13%. For many, this sounds like a winning strategy, but what does it mean to adopt cloud native approaches, and how does it impact
Tags : 
     Avanade  DACH
By: Sage EMEA     Published Date: Jan 29, 2019
Enterprises must continuously change to keep ahead of the competition, reduce silos, improve connectivity and respond rapidly to a changing world. Organisations also need to drive continuous innovation with technology that helps them adapt faster. So if you’re thinking of replacing your legacy ERP system, start by asking yourself these three essential questions:
Tags : 
     Sage EMEA
By: Rackspace     Published Date: Feb 12, 2019
The decision to move business workloads and applications to the cloud impacts all parts of the business and isn’t a decision isolated to the IT team. Our latest research study on the different motives, concerns and experiences of executive peers and business stakeholders when securing buy-in for a strategic IT move found that 97% of C-level executives in ANZ suffered from migration regret during their first cloud migration. Packed with telling hindsight, over 200 C-suite executives shared their expectations and experiences during the cloud migration journey. They reveal what they would have done differently, namely enhanced communication and a clear plan of action, and offer practical advice to help others get buy-in internally and secure funding to support a move to the cloud.
Tags : 
     Rackspace
By: Larsen & Toubro Infotech     Published Date: Jan 24, 2019
All native and ‘transitioning’ media companies are focusing heavily on content to save existing businesses, build new business models, or both. Television broadcasters, wary of growing cord-cutting, are spending large sums on premium content. In 2017, the top four media companies spent more than USD 34 billion on original and acquired non-sports programming. Pure-play OTT providers have, on the other hand, bet big on content to build up their subscriber bases. Netflix alone spent more than USD 6 billion on content last year, while Amazon and Hulu combined spent USD 7 billion. Transitioning media companies, such as telecom and technology companies moving toward becoming media companies, are also allocating sizable funds for content in their quest to explore supplementary businesses by boosting customer engagement on their platforms. Apple and Facebook have started creating their own original content, and spend is only going to expand further.
Tags : 
     Larsen & Toubro Infotech
By: HERE Technologies     Published Date: Feb 13, 2019
Discover the four big trends in fleet management being powered by location services: trends that help you differentiate your solutions and enable transportation companies to overcome their logistical challenges and increase asset utilization. Learn what’s making the biggest impact, and how integrating these trends into your solutions can position you as the service provider of choice in fleet and transportation management. Find out how HERE is delivering features, from comprehensive mapping capabilities and real-time location data to truck-specific attributes, to help you do just that. Download the eBook now.
Tags : location data, transport & logistics, location services
     HERE Technologies
By: Fujitsu America, Inc.     Published Date: Jan 22, 2019
The diversity and bureaucratic nature of government agencies have complicated communication for decades. But today digital technologies offer a path to connectivity and information sharing that could help break the gridlock. Innovative mobile and field force automation (FFA) technologies are helping state and federal government agencies break down the walls that hindered cooperation and decision making between offices and field personnel. This document is designed to help ensure that your mobile device platform selection and processes provide the power, reliability, and flexibility you need to achieve your mission, whether in the office, in the field, or ten flights up in the air.
Tags : 
     Fujitsu America, Inc.
To get your white papers featured in the insideHPC White Paper Library, contact: Kevin@insideHPC.com