
By: Altair     Published Date: Feb 19, 2014
The Weizmann Institute of Science is one of the world’s leading multidisciplinary research institutions. Hundreds of scientists, laboratory technicians and research students embark daily on fascinating journeys into the unknown, seeking to improve our understanding of nature and our place within it. Groundbreaking medical and technological applications that have emerged from basic research conducted by Weizmann Institute scientists include: amniocentesis, a prenatal diagnostic test; sophisticated laser systems for high-precision diamond cutting; living polymerization, one of the most fundamental techniques of the modern polymer industry; and ribosome structure analysis, for which the Institute’s Professor Ada Yonath was awarded a Nobel Prize in Chemistry.
Tags : 
     Altair
By: Bright Computing     Published Date: May 05, 2014
A successful HPC cluster is a powerful asset for an organization. The following essential strategies are guidelines for the effective operation of an HPC cluster resource:
1. Plan to manage the cost of software complexity
2. Plan for scalable growth
3. Plan to manage heterogeneous hardware/software solutions
4. Be ready for the cloud
5. Have an answer for the Hadoop question
Bright Cluster Manager addresses the above strategies remarkably well and allows HPC and Hadoop clusters to be easily created, monitored, and maintained using a single comprehensive user interface. Administrators can focus on more sophisticated, value-adding tasks rather than developing homegrown solutions that may cause problems as clusters grow and change. The end result is an efficient and successful HPC cluster that maximizes user productivity.
Tags : bright computing, hpc clusters
     Bright Computing
By: IBM     Published Date: Jun 05, 2014
In an audited benchmark conducted by STAC®, the Securities Technology Analysis Center, InfoSphere BigInsights for Hadoop was found to deliver an approximate 4x average performance gain over open-source Hadoop running jobs derived from production workload traces. The result is consistent with an approximately 11x advantage in raw scheduling performance provided by Adaptive MapReduce, a new InfoSphere BigInsights for Hadoop feature that leverages high-performance computing technology from IBM Platform Computing.
Tags : ibm
     IBM
By: SGI     Published Date: Mar 03, 2015
The SGI UV system is uniquely suited to bioinformatics and genomics, providing the computational capabilities and global shared-memory architecture needed for even the most demanding sequencing and analytic tasks, including post-sequencing and other data-intensive workflows. Because of the system’s outstanding speed and throughput, genomics researchers can perform very large jobs in less time, realizing a dramatically accelerated time-to-solution. And best of all, they can explore avenues of research that were computationally beyond the reach of HPC systems lacking the power and in-memory capabilities of the SGI UV.
Tags : 
     SGI
By: SGI     Published Date: Nov 17, 2015
In the pantheon of HPC grand challenges, weather forecasting and long-term climate simulation rank right up there with the most complex and computationally demanding problems in astrophysics, aeronautics, fusion power, exotic materials, and earthquake prediction, to name just a few. Modern weather prediction requires cooperation in the collection of observed data and the sharing of forecast output among all nations, a collaboration that has been ongoing for decades. This data is used to simulate effects on a range of scales: from events, such as the path of tornadoes, that change from minute to minute and move over distances measured in meters, to the turnover of water layers in the ocean, a process that is measured in decades or even hundreds of years and spans thousands of miles. The amount of data collected is staggering. Hundreds of thousands of surface stations, including airborne radiosondes, ships and buoys, aircraft, and dozens of weather satellites, are streaming terabytes of…
Tags : 
     SGI
By: SGI     Published Date: Jun 08, 2016
With High Performance Computing (HPC) supercomputer systems that comprise tens, hundreds, or even thousands of computing cores, users are able to increase application performance and accelerate their workflows to realize dramatic productivity improvements. The performance potential often comes at the cost of complexity. By their very nature, supercomputers comprise a great number of components, both hardware and software, that must be installed, configured, tuned, and monitored to maintain maximum efficiency. In a recent report, IDC lists downtime and latency as two of the most important problems faced by data center managers.
Tags : 
     SGI
By: IBM     Published Date: May 20, 2015
Assembling a clustered environment can be complex due to the many software components that are required to enable technical and high performance computing (HPC) applications to run effectively. This webcast will demonstrate how IBM Platform Computing products simplify cluster deployment, use and management, bringing high performance capabilities to both experienced and new HPC administrators and users. We will also present examples of companies utilizing Platform Computing software today, to improve performance and utilization of their HPC environment while reducing costs.
Tags : 
     IBM
By: SGI     Published Date: Jan 07, 2015
After a long dry spell triggered in part by the global economic downturn, manufacturing is enjoying an economic and technological resurgence. According to the Institute for Supply Management (ISM), American manufacturing continues to improve. The ISM recently reported that manufacturing activity expanded in September 2014 for the 16th consecutive month, and that the overall national economy grew for the 64th consecutive month. Part of this growth is being fueled by the adoption of computer-aided engineering (CAE) and analysis solutions powered by high performance computing (HPC), especially by the Tier One manufacturers. HPC is beginning to make some inroads into the ranks of small-to-medium-sized manufacturers (SMMs), but the going is slow.
Tags : 
     SGI
By: NVIDIA & Bright Computing     Published Date: Sep 01, 2015
As of June 2015, the second-fastest computer in the world, as measured by the Top500 list, employed NVIDIA® GPUs. Of the systems on the same list that use accelerators, 60% use NVIDIA GPUs. The performance kick provided by computing accelerators has pushed High Performance Computing (HPC) to new levels. When discussing GPU accelerators, the focus is often on the price-to-performance benefits to the end user. The true cost of managing and using GPUs, however, goes far beyond the hardware price. Understanding and managing these costs helps provide more efficient and productive systems.
Tags : 
     NVIDIA & Bright Computing
By: IBM     Published Date: Nov 14, 2014
IBM® has created a proprietary implementation of the open-source Hadoop MapReduce run-time that leverages the IBM Platform™ Symphony distributed computing middleware while maintaining application-level compatibility with Apache Hadoop.
Tags : 
     IBM
By: IBM     Published Date: Nov 14, 2014
A high performance computing (HPC) cluster refers to a group of servers built from off-the-shelf components that are connected via certain interconnect technologies. A cluster can deliver aggregated computing power from its many processors with many cores — sometimes hundreds, even thousands — to meet the processing demands of more complex engineering software, and therefore deliver results faster than individual workstations. If your company is in the majority that could benefit from access to more computing power, a cluster comprised of commodity servers may be a viable solution to consider, especially now that they’re easier to purchase, deploy, configure and maintain than ever before. Read more and learn about the '5 Easy Steps to a High Performance Cluster'.
Tags : 
     IBM
By: Dell and Intel®     Published Date: Mar 30, 2015
Dell has teamed with Intel® to create innovative solutions that can accelerate the research, diagnosis and treatment of diseases through personalized medicine. The combination of leading-edge Intel® Xeon® processors and the systems and storage expertise of Dell creates a state-of-the-art data center solution that is easy to install, manage and expand as required. Labelled the Dell Genomic Data Analysis Platform (GDAP), this solution is designed to achieve fast results with maximum efficiency. The solution is architected to address a number of customer challenges, including compliance, security, clinician usability, and the perception that an implementation must be large-scale in nature.
Tags : 
     Dell and Intel®
By: General Atomics     Published Date: Jan 13, 2015
The term “Big Data” has become virtually synonymous with “schema on read” (where a schema is applied to data as it is ingested or pulled out of a stored location) and with unstructured data analysis and handling techniques like Hadoop. These “schema on read” techniques have been most famously exploited on relatively ephemeral, human-readable data such as retail trends, Twitter sentiment, social network mining, and log files. But what if you have unstructured data that, on its own, is hugely valuable, enduring, and created at great expense? Data that may not immediately be human-readable or indexable by search? Exactly the kind of data most commonly created and analyzed in science and HPC. Research institutions are awash with such data from large-scale experiments and extreme-scale computing that is used for high-consequence…
Tags : general atomics, big data, metadata, nirvana
     General Atomics
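The schema-on-read idea described in the abstract above can be sketched in a few lines: raw records are stored untouched, and a field list (the "schema") is applied only when the data is read. This is an illustrative sketch, not code from the paper; the record format and field names are invented for the example.

```python
import json

# Raw records are stored exactly as produced; no schema is enforced at write time.
raw_records = [
    '{"station": "KSFO", "temp_c": 14.2}',
    '{"station": "KJFK", "temp_c": 7.8, "wind_kt": 12}',
]

def read_with_schema(lines, fields):
    """Apply a schema only at read time ("schema on read").

    Fields missing from a record simply come back as None rather than
    causing a write-time rejection.
    """
    for line in lines:
        rec = json.loads(line)
        yield {f: rec.get(f) for f in fields}

rows = list(read_with_schema(raw_records, ["station", "temp_c", "wind_kt"]))
print(rows[0])  # → {'station': 'KSFO', 'temp_c': 14.2, 'wind_kt': None}
```

The same raw lines could later be re-read with a different field list, which is the flexibility the "schema on read" approach trades against up-front validation.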
By: AMD     Published Date: Nov 09, 2015
Graphics Processing Units (GPUs) have become a compelling technology for High Performance Computing (HPC), delivering exceptional performance per watt and impressive densities for data centers. AMD has partnered with Hewlett Packard Enterprise to offer compelling solutions that drive your HPC workloads to new levels of performance. Learn about the awe-inspiring performance and energy efficiency of the AMD FirePro™ S9150, found in multiple HPE servers including the popular 2U HPE ProLiant DL380 Gen9 server. See why open standards matter for HPC, and what AMD is doing in this area. Read more on AMD FirePro™ server GPUs for HPE ProLiant servers.
Tags : 
     AMD
By: HPE     Published Date: Jul 21, 2016
Deep learning is a method of creating artificial intelligence systems that combine computer-based multi-layer neural networks with intensive training techniques and large data sets to enable analysis and predictive decision making. A fundamental aspect of deep learning environments is that they transcend finite programmable constraints and reach the realm of extensible and trainable systems. Recent developments in technology and algorithms have enabled deep learning systems not only to equal but to exceed human capabilities in the pace of processing vast amounts of information.
Tags : 
     HPE
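As a rough illustration of the "multi-layer neural network" building block the abstract above refers to, here is a minimal two-layer forward pass in plain Python. The layer sizes, random weights, and ReLU activation are arbitrary choices for the sketch, not details from the paper, and no training step is shown.

```python
import random

random.seed(0)  # reproducible weights for the sketch

def dense(x, w, b):
    # One fully connected layer: y[j] = sum_i x[i] * w[i][j] + b[j]
    return [sum(xi * w[i][j] for i, xi in enumerate(x)) + b[j]
            for j in range(len(b))]

def relu(v):
    # Elementwise rectifier: negative activations are clipped to zero.
    return [max(0.0, x) for x in v]

# Two-layer network: 3 inputs -> 4 hidden units -> 1 output.
w1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
b1 = [0.0] * 4
w2 = [[random.uniform(-1, 1)] for _ in range(4)]
b2 = [0.0]

def forward(x):
    return dense(relu(dense(x, w1, b1)), w2, b2)

print(forward([0.5, -1.0, 2.0]))
```

Stacking more such layers, and fitting the weights to data with gradient-based training, is what turns this fixed program into the "extensible and trainable" system the abstract describes.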
By: VMware     Published Date: Sep 12, 2019
You’ve heard the stories: a large Internet company exposing all three billion of its customer accounts; a major hotel chain compromising five hundred million customer records; and one of the big-three credit reporting agencies exposing more than 143 million records, leading to a 25 percent loss in value and a $439 million hit. At the time, all of these companies had security mechanisms in place. They had trained professionals on the job. They had invested heavily in protection. But the reality is that no amount of investment in preventative technologies can fully eliminate the threat of savvy attackers, malicious insiders, or inadvertent victims of phishing. Breaches are rising, and so are their costs. In 2018, the average cost of a data breach rose 6.4 percent to $3.86 million, and the cost of a “mega breach,” defined as the loss of 1 million to 50 million records, carried especially punishing price tags of between $40 million and $350 million. Despite increasing investment in security…
Tags : 
     VMware
By: F5 Networks Singapore Pte Ltd     Published Date: Sep 19, 2019
Safeguarding the identity of users and managing the level of access they have to critical business applications could be the biggest security challenge organizations face in today’s assumed-breach world. Over 6,500 publicly disclosed data breaches occurred in 2018 alone, exposing over 5 billion records, a large majority of which included usernames and passwords. This wasn’t new to 2018, though, as evidenced by the existence of an online, searchable database of 8 billion stolen username and password combinations (https://haveibeenpwned.com/), keeping in mind that only 4.3 billion people worldwide have internet access. These credentials aren’t stolen just for fun; they are the leading attack type for causing a data breach. And the driving force behind the majority of credential attacks is bots, malicious ones, because they enable cybercriminals to achieve scale. That’s why prioritizing secure access and bot protection needs to be part of every organ…
Tags : 
     F5 Networks Singapore Pte Ltd
By: HERE Technologies     Published Date: Sep 23, 2019
As ride-hailing, e-commerce and food delivery services continue to grow, the value of curbside space is increasingly apparent. Though valuable and potentially lucrative, its original design is failing to live up to the needs of today's society. The demand for curbside space is growing, but the asset itself is fixed. To reduce urban congestion, ensure optimal roadway performance management, encourage new mobility solutions and facilitate on-demand delivery efficiency, it is vital that the curb be better utilized. In this eBook, gain insight from six visionaries and thought leaders with a stake in the landscape of urban mobility as they share how to better optimize curbside space.
Tags : 
     HERE Technologies
By: KPMG     Published Date: Jun 06, 2019
HR’s most confident leaders are using data, predictive insights and AI to transform HR into a new value driver. Discover what it takes to become an HR transformation trailblazer. Read this report to discover:
• how trailblazers are exploiting uncertainty to drive new competitive advantage
• which technologies HR leaders are investing in
• what it means to integrate human and digital labour in a collaborative workplace
• six priorities for forward-looking HR leaders.
Tags : 
     KPMG
By: Cisco Umbrella EMEA     Published Date: Sep 02, 2019
Users are working off-hours, off-network, and off-VPN. Are you up on all the ways DNS can be used to secure them? If not, maybe it’s time to brush up. More than 91% of malware uses DNS to gain command and control, exfiltrate data, or redirect web traffic. Because DNS is a protocol used by all devices that connect to the internet, security at the DNS layer is critical for achieving the visibility and protection you need for any users accessing the internet. Learn how DNS-layer security can help you block threats before they reach your network or endpoints.
Tags : 
     Cisco Umbrella EMEA
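A DNS-layer control of the kind the abstract above describes can be sketched as a resolver that consults a blocklist before answering. The domains, blocklist, and resolver below are invented for illustration and are unrelated to Cisco Umbrella's actual implementation.

```python
# Minimal sketch of DNS-layer filtering: before resolving a name, check it
# against a blocklist of known-malicious domains (the domain names here are
# illustrative, not from any real threat feed).
BLOCKLIST = {"malware-c2.example", "phish.example"}

def resolve(name, upstream):
    """Return an answer from `upstream` unless the domain is blocked."""
    # Block the exact name and any subdomain of a listed domain.
    labels = name.lower().rstrip(".").split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return None  # act like NXDOMAIN / sinkhole: blocked at the DNS layer
    return upstream(name)

answers = {"example.com": "93.184.216.34"}
print(resolve("example.com", answers.get))           # resolves normally
print(resolve("a.malware-c2.example", answers.get))  # blocked before lookup
```

Because every internet-connected device performs DNS lookups before making connections, a check at this layer applies to users regardless of which application or network they are on, which is the point the abstract makes.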
By: Cisco Umbrella EMEA     Published Date: Sep 02, 2019
You are doing everything you can to avoid breaches. But what happens when a hacker manages to bypass your security? In this webinar we will show you how to build a strong security posture and a layered defence that will give you the ability to quickly respond to breaches. We will cover:
- The evolving threat landscape and why prevention-only strategies eventually fail
- How to build a strong first line of defence to reduce exposure to threats
- Protecting your last line of defence with retrospective security
- A quick demo of how Cisco Umbrella and AMP for Endpoints work together to contain, detect and remediate threats in real time
- An overview of how Incident Response Services can help you with the skills you need to manage a breach
Tags : 
     Cisco Umbrella EMEA
By: Cisco Umbrella EMEA     Published Date: Sep 02, 2019
Cloud applications provide scale and cost benefits over legacy on-premises solutions. With more users going direct-to-internet from any device, the risk increases when users bypass security controls. We can help you reduce this risk across all of your cloud and on-premises applications with a zero-trust strategy that validates devices and domains, not just user credentials. See why thousands of customers rely on Duo and Cisco Umbrella to reduce the risks of data breaches and improve security. Don’t miss this best-practices discussion focused on the key role DNS and access control play in your zero-trust security strategy. Attendees will learn how to:
- Reduce the risk of phishing attacks and compromised credentials
- Improve speed-to-security across all your cloud applications
- Extend security on and off-network without sacrificing usability
Tags : 
     Cisco Umbrella EMEA
By: TIBCO Software     Published Date: Jul 22, 2019
Connected Intelligence in Insurance
Insurance as we know it is transforming dramatically, thanks to capabilities brought about by new technologies such as machine learning and artificial intelligence (AI). Download this IDC Analyst Infobrief to learn how the new breed of insurers are becoming more personalized, more predictive, and more real-time than ever. What you will learn:
- The insurance industry's global digital trends, supported by data and analysis
- What capabilities will make the insurers of the future become disruptors in their industry
- Notable leaders based on IDC Financial Insights research and their respective use cases
- Essential guidance from IDC
Tags : 
     TIBCO Software
Get your white papers featured in the insideHPC White Paper Library contact: Kevin@insideHPC.com