
Results 1 - 25 of 2205
By: Seagate     Published Date: Jan 27, 2015
This paper is the first to explore a recent breakthrough: the High Performance Computing (HPC) industry’s first Intelligence Community Directive (ICD) 503 (DCID 6/3 PL4) certified, compliant and secure scale-out parallel file system solution, the Seagate ClusterStor™ Secure Data Appliance. It is designed to address government and business enterprise needs for collaborative and secure information sharing within a Multi-Level Security (MLS) framework at Big Data and HPC scale.
Tags : 
     Seagate
By: Intel     Published Date: Aug 06, 2014
Around the world and across all industries, high-performance computing is being used to solve today’s most important and demanding problems. More than ever, storage solutions that deliver high sustained throughput are vital for powering HPC and Big Data workloads.
Tags : intel, enterprise edition lustre software
     Intel
By: Intel     Published Date: Aug 06, 2014
Powering Big Data Workloads with Intel® Enterprise Edition for Lustre* software. The Intel® portfolio for high-performance computing provides the following technology solutions:
• Compute - The Intel® Xeon processor E7 family provides a leap forward for every discipline that depends on HPC, with industry-leading performance and improved performance per watt. Add Intel® Xeon Phi coprocessors to your clusters and workstations to increase performance for highly parallel applications and code segments. Each coprocessor can add over a teraflop of performance and is compatible with software written for the Intel® Xeon processor E7 family. You don’t need to rewrite code or master new development tools.
• Storage - High-performance, highly scalable storage solutions with Intel® Enterprise Edition for Lustre* software and Intel® Xeon processor E7-based storage systems for centralized storage, plus reliable and responsive local storage with Intel® Solid State Drives.
• Networking - Intel® True Scale Fabric and Networking technologies, built for HPC to deliver fast message rates and low latency.
• Software and Tools - A broad range of software and tools to optimize and parallelize your software and clusters.
Tags : 
     Intel
By: IBM     Published Date: Jun 05, 2014
Whether in high-performance computing, Big Data or analytics, information technology has become an essential tool in today’s hyper-competitive business landscape. Organizations are increasingly being challenged to do more with less and this is fundamentally impacting the way that IT infrastructure is deployed and managed. In this short e-book, learn the top ten ways that IBM Platform Computing customers are using technologies like IBM Platform LSF and IBM Platform Symphony to help obtain results faster, share resources more efficiently, and improve the overall cost-effectiveness of their global IT infrastructure.
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
In an audited benchmark conducted by STAC®, the Securities Technology Analysis Center, InfoSphere BigInsights for Hadoop was found to deliver an approximate 4x performance gain on average over open source Hadoop running jobs derived from production workload traces. The result is consistent with an approximate eleven times advantage in raw scheduling performance provided by Adaptive MapReduce – a new InfoSphere BigInsights for Hadoop feature that leverages high-performance computing technology from IBM Platform Computing.
Tags : ibm
     IBM
By: IBM     Published Date: Sep 16, 2015
6 criteria for evaluating high-performance cloud service providers. Engineering, scientific, analytics, big data and research workloads place extraordinary demands on technical and high-performance computing (HPC) infrastructure. Supporting these workloads can be especially challenging for organizations that have unpredictable spikes in resource demand, or that need access to additional compute or storage resources for a project or to support a growing business. Software Defined Infrastructure (SDI) enables organizations to deliver HPC services in the most efficient way possible, optimizing resource utilization to accelerate time to results and reduce costs. SDI is the foundation for a fully integrated environment, optimizing compute, storage and networking infrastructure to quickly adapt to changing business requirements, and dynamically managing workloads and data.
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
According to our global study of more than 800 cloud decision makers, users are becoming increasingly focused on the business value cloud provides. Cloud is integral to mobile, social and analytics initiatives, and to the big data management challenge that often comes with them; it helps power the entire suite of game-changing technologies. Enterprises can aim higher when these deployments are riding on the cloud. Mobile, analytics and social implementations can be bigger, bolder and drive greater impact when backed by scalable infrastructure. In addition to scale, cloud can provide integration, gluing the individual technologies into more cohesive solutions. Learn how companies are gaining a competitive advantage with cloud computing.
Tags : 
     IBM
By: IBM     Published Date: May 20, 2015
Whether in high-performance computing, Big Data or analytics, information technology has become an essential tool in today’s hyper-competitive business landscape. Organizations are increasingly being challenged to do more with less and this is fundamentally impacting the way that IT infrastructure is deployed and managed. In this short e-book, learn the top ten ways that IBM Platform Computing customers are using technologies like IBM Platform LSF and IBM Platform Symphony to help obtain results faster, share resources more efficiently, and improve the overall cost-effectiveness of their global IT infrastructure.
Tags : 
     IBM
By: IBM     Published Date: May 20, 2015
The latest generation of highly scalable HPC clusters is a game changer for design optimization challenges. HPC clusters, built on a modular, multi-core x86 architecture, provide a cost-effective and accessible platform for realistic simulation compared with the “big iron” HPC systems of the past or with the latest workstation models. This paper provides six steps to making clusters a reality for any business.
Tags : 
     IBM
By: Bull     Published Date: Dec 04, 2014
Bull, an Atos company, is a leader in Big Data, HPC and cyber-security with a worldwide market presence. Bull has extensive experience in implementing and running petaflops-scale supercomputers. The exascale program is a new step forward in Bull’s strategy to deliver exascale supercomputers capable of addressing the new challenges of science, industry and society.
Tags : bull, exascale, big data, hpc, cyber security, supercomputers
     Bull
By: Adaptive Computing     Published Date: Feb 21, 2014
Big data applications represent a fast-growing category of high-value applications that are increasingly employed by business and technical computing users. However, they have exposed an inconvenient dichotomy in the way resources are utilized in data centers. Conventional enterprise and web-based applications can be executed efficiently in virtualized server environments, where resource management and scheduling is generally confined to a single server. By contrast, data-intensive analytics and technical simulations demand large aggregated resources, necessitating intelligent scheduling and resource management that spans a computer cluster, cloud, or entire data center. Although these tools exist in isolation, they are not available in a general-purpose framework that allows them to interoperate easily and automatically within existing IT infrastructure.
Tags : 
     Adaptive Computing
By: Intel     Published Date: Sep 16, 2014
In this Guide, we take a look at what an HPC solution like Lustre can deliver for a broad community of business and commercial organizations struggling with the challenge of big data and demanding storage growth.
Tags : intel, lustre*, solution for business
     Intel
By: IBM     Published Date: Nov 14, 2014
The latest generation of highly scalable HPC clusters is a game changer for design optimization challenges. HPC clusters, built on a modular, multi-core x86 architecture, provide a cost-effective and accessible platform for realistic simulation compared with the “big iron” HPC systems of the past or with the latest workstation models. This paper provides six steps to making clusters a reality for any business.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
Swift Engineering wanted a solution that would let them spend more time solving complex problems than administering the system. Cray’s CX1000 combined with Platform HPC allowed Swift to solve bigger problems, with enhanced graphics, at real-time speed.
Tags : 
     IBM
By: Dell and Intel®     Published Date: Nov 18, 2015
Unleash the extreme performance and scalability of the Lustre® parallel file system for high performance computing (HPC) workloads, including technical ‘big data’ applications common within today’s enterprises. The Dell Storage for HPC with Intel® Enterprise Edition (EE) for Lustre Solution allows end-users that need the benefits of large–scale, high bandwidth storage to tap the power and scalability of Lustre, with its simplified installation, configuration, and management features that are backed by Dell and Intel®.
Tags : 
     Dell and Intel®
By: General Atomics     Published Date: Jan 13, 2015
The term “Big Data” has become virtually synonymous with “schema on read” (where data is applied to a plan or schema as it is ingested or pulled out of a stored location) unstructured data analysis and handling techniques like Hadoop. These “schema on read” techniques have been most famously exploited on relatively ephemeral, human-readable data like retail trends, Twitter sentiment, social network mining, log files, etc. But what if you have unstructured data that, on its own, is hugely valuable, enduring, and created at great expense? Data that may not immediately be human readable or indexable on search? Exactly the kind of data most commonly created and analyzed in science and HPC. Research institutions are awash with such data from large-scale experiments and extreme-scale computing that is used for high-consequence…
Tags : general atomics, big data, metadata, nirvana
     General Atomics
By: Avere Systems     Published Date: Jun 27, 2016
This white paper reviews common HPC-environment challenges and outlines solutions that can help IT professionals deliver best-in-class HPC cloud solutions without undue stress and organizational chaos. The paper:
• Identifies current issues, including data management, data center limitations, user expectations, and technology shifts, that stress IT teams and existing infrastructure across industries and HPC applications
• Describes the potential cost savings, operational scale, and new functionality that cloud solutions can bring to big compute
• Characterizes technical and other barriers to an all-cloud infrastructure, and describes how IT teams can leverage a hybrid cloud for compute power, maximum flexibility, and protection against lock-in scenarios
Tags : 
     Avere Systems
By: VMware     Published Date: Sep 12, 2019
IT process automation can seem like a big step, but there are incremental ways to achieve improved efficiency that reduce risk and drive better business outcomes. The proper approach to automation can turbocharge your administrative teams, open new doors and provide the agility to respond to any business need. It’s the key to unlocking the power of cloud, and new service offerings make it more attractive to those looking to build the right IT business strategy for the future.
Tags : 
     VMware
By: VMware     Published Date: Sep 12, 2019
You’ve heard the stories: a large Internet company exposing all three billion of its customer accounts; a major hotel chain compromising five hundred million customer records; and one of the big-three credit reporting agencies exposing more than 143 million records, leading to a 25 percent loss in value and a $439 million hit. At the time, all of these companies had security mechanisms in place. They had trained professionals on the job. They had invested heavily in protection. But the reality is that no amount of investment in preventative technologies can fully eliminate the threat of savvy attackers, malicious insiders, or inadvertent victims of phishing. Breaches are rising, and so are their costs. In 2018, the average cost of a data breach rose 6.4 percent to $3.86 million, and the cost of a “mega breach,” defined as one losing 1 million to 50 million records, carried especially punishing price tags of between $40 million and $350 million.2 Despite increasing investment in security…
Tags : 
     VMware
By: F5 Networks Singapore Pte Ltd     Published Date: Sep 09, 2019
Safeguarding the identity of users and managing the level of access they have to critical business applications could be the biggest security challenge organizations face in today’s assumed breach world.
Tags : 
     F5 Networks Singapore Pte Ltd
By: F5 Networks Singapore Pte Ltd     Published Date: Sep 19, 2019
Safeguarding the identity of users and managing the level of access they have to critical business applications could be the biggest security challenge organizations face in today’s assumed-breach world. Over 6,500 publicly disclosed data breaches occurred in 2018 alone, exposing over 5 billion records, a large majority of which included usernames and passwords.1 This wasn’t new to 2018, though, as evidenced by the existence of an online, searchable database of 8 billion username and password combinations that have been stolen over the years (https://haveibeenpwned.com/), keeping in mind there are only 4.3 billion people worldwide with internet access. These credentials aren’t stolen just for fun; they are the leading attack type for causing a data breach. And the driving force behind the majority of credential attacks is bots, malicious ones, because they enable cybercriminals to achieve scale. That’s why prioritizing secure access and bot protection needs to be part of every organ…
Tags : 
     F5 Networks Singapore Pte Ltd
By: KPMG     Published Date: Jun 06, 2019
AI will transform the workplace in ways we’re still trying to imagine. What skills and capabilities will your organisation need to survive? Read this report to find out, with contributions from government, academics and the Big Innovation Centre. Download the report to discover:
• how AI will change the way economies, societies and businesses operate
• how AI will change the skills your workforce needs in the 21st century
• what AI means for the way we learn
• how AI will change the role of the HR function.
Tags : 
     KPMG
By: KPMG     Published Date: Jun 06, 2019
What impact will the cloud-enabled workplace have on your cybersecurity strategy? This year’s research shows that organisations are navigating a myriad of both old and new cybersecurity challenges to bring the cloud into scope. Read this to discover:
• how growing cloud dependency has created distinctive challenges around cyber security
• what the biggest cyber challenges are for organisations in this context
• how intelligent automation and machine learning is being used to overcome operational obstacles hampering cloud security
• a set of cybersecurity considerations for modern IT environments.
Tags : 
     KPMG
By: KPMG     Published Date: Jun 10, 2019
Getting complex decisions right across complicated operational networks is the key to optimum performance. Find out how one of the UK’s biggest bus operators is using data and analytics to make better decisions and optimise the use of resources across their network. Read this story to discover:
• how data and analytics can transform operational performance
• the benefits of using decision-support tools in the middle office
• key lessons for getting your plans for digital transformation right.
Tags : 
     KPMG

To get your white papers featured in the insideHPC White Paper Library, contact: Kevin@insideHPC.com