Technology

By: Seagate     Published Date: Jan 26, 2016
Finding oil and gas has always been a tricky proposition, given that reserves are hidden underground and, as often as not, under the ocean as well. The costs involved in acquiring rights to a site, drilling the wells, and operating them are considerable and have driven the industry to adopt advanced technologies for locating the most promising sites. As a consequence, oil and gas exploration today is essentially an exercise in scientific visualization and modeling, employing some of the most advanced computational technologies available. High performance computing (HPC) systems are being used to fill these needs, primarily with x86-based cluster computers and Lustre storage systems. The technology is well developed, but the scale of the problem demands medium to large-sized systems, requiring a significant capital outlay and operating expense. The most powerful systems deployed by oil and gas companies are petaflop-scale computers with multiple petabytes of attached storage.
Tags : 
     Seagate
By: Intel     Published Date: Aug 06, 2014
Powering Big Data Workloads with Intel® Enterprise Edition for Lustre* software. The Intel® portfolio for high-performance computing provides the following technology solutions:
• Compute – The Intel® Xeon processor E7 family provides a leap forward for every discipline that depends on HPC, with industry-leading performance and improved performance per watt. Add Intel® Xeon Phi coprocessors to your clusters and workstations to increase performance for highly parallel applications and code segments. Each coprocessor can add over a teraflop of performance and is compatible with software written for the Intel® Xeon processor E7 family. You don't need to rewrite code or master new development tools.
• Storage – High-performance, highly scalable storage solutions with Intel® Enterprise Edition for Lustre* software and Intel® Xeon processor E7 based storage systems for centralized storage. Reliable and responsive local storage with Intel® Solid State Drives.
• Networking – Intel® True Scale Fabric and Networking technologies, built for HPC to deliver fast message rates and low latency.
• Software and tools – A broad range of software and tools to optimize and parallelize your software and clusters.
Tags : 
     Intel
By: Altair     Published Date: Feb 12, 2014
A Cray-Altair Solution for Optimized External Aerodynamics. Cray and Altair are leaders in providing the powerful, usable technology engineers need to perform external aerodynamics analysis with greater speed and accuracy. With Altair's HyperWorks Virtual Wind Tunnel running on Cray XC30 or CS300 systems, manufacturers of all sizes can now predict a vehicle's external aerodynamic performance and improve the cooling, comfort, visibility and stability features in their designs -- without the need for numerous physical wind tunnel tests.
Tags : cray-altair, optimized external aerodynamics, altair hyperworks
     Altair
By: IBM     Published Date: Jun 05, 2014
Whether in high-performance computing, Big Data or analytics, information technology has become an essential tool in today's hyper-competitive business landscape. Organizations are increasingly challenged to do more with less, and this is fundamentally changing the way IT infrastructure is deployed and managed. In this short e-book, learn the top ten ways that IBM Platform Computing customers are using technologies like IBM Platform LSF and IBM Platform Symphony to obtain results faster, share resources more efficiently, and improve the overall cost-effectiveness of their global IT infrastructure.
Tags : ibm
     IBM
By: IBM     Published Date: Jun 05, 2014
In an audited benchmark conducted by STAC®, the Securities Technology Analysis Center, InfoSphere BigInsights for Hadoop delivered an average performance gain of approximately 4x over open source Hadoop on jobs derived from production workload traces. The result is consistent with the approximately 11x advantage in raw scheduling performance provided by Adaptive MapReduce, a new InfoSphere BigInsights for Hadoop feature that leverages high-performance computing technology from IBM Platform Computing.
Tags : ibm
     IBM
By: IBM     Published Date: Sep 02, 2014
With tougher regulations and continuing market volatility, financial firms are moving to active risk management with a focus on counterparty risk. Firms are revamping their risk and trading practices from top to bottom. They are adopting new risk models and frameworks that support a holistic view of risk. Banks recognize that technology is critical for this transformation, and are adding state-of-the-art enterprise risk management solutions, high performance data and grid management software, and fast hardware. Join IBM Algorithmics and IBM Platform Computing to gain insights on this trend and on technologies for enabling active "real-time" risk management.
Tags : 
     IBM
By: IBM     Published Date: Sep 16, 2015
Docker is a lightweight Linux container technology built on top of LXC (Linux Containers) and cgroups (control groups) that offers many attractive benefits for HPC environments. Find out how IBM Platform LSF® and Docker have been integrated outside the core of Platform LSF, with a real-world example involving the BWA application (bio-bwa.sourceforge.net). This step-by-step white paper details how to get started with the IBM Platform LSF and Docker integration, which is available via open beta on Service Management Connect.
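As a rough illustration of the kind of workflow the white paper walks through, the sketch below submits a containerized BWA alignment job to LSF via bsub from Python. The container image, data paths, and resource values are hypothetical; the paper's actual integration steps should be followed for a real deployment.

```python
# Minimal sketch: submitting a containerized BWA job to LSF with bsub.
# Image name, mount paths and resource values are hypothetical placeholders;
# they are not taken from the white paper.
import subprocess

docker_cmd = (
    "docker run --rm "
    "-v /shared/genomes:/data "            # host directory with reference and reads
    "biocontainers/bwa "                   # hypothetical BWA container image
    "bwa mem /data/ref.fa /data/reads.fq"
)

bsub_cmd = [
    "bsub",
    "-n", "4",                   # request 4 job slots
    "-R", "rusage[mem=8000]",    # hypothetical memory reservation
    "-o", "bwa.%J.out",          # write job output to a per-job file
    docker_cmd,                  # command line LSF will execute
]

subprocess.run(bsub_cmd, check=True)
```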
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
A*Star faced high levels of user discontent because its computational resources could not keep pace with its user population or its number of research projects. IBM Platform LSF acted as the single unifying workload scheduler and helped rapidly increase resource utilization.
Tags : 
     IBM
By: IBM     Published Date: Feb 13, 2015
The new Clusters for Dummies e-book from IBM Platform Computing explains how clustering technology enables you to run higher-quality simulations and shorten the time to discovery. In this e-book, you'll discover how to:
• Make a cluster work for your business
• Create clusters using commodity components
• Use workload management software for reliable results
• Use cluster management software for simplified administration
• Learn from case studies of clusters in action
With clustering technology you can increase your compute capacity, accelerate the innovation process, shrink time to insight, and improve your productivity, all of which will lead to increased competitiveness for your business.
Tags : 
     IBM
By: Panasas     Published Date: Oct 02, 2014
HPC and technical computing environments require the collection, storage, and transmission of large-scale datasets. To meet these demands, data center architects must consider how increasing storage capacity over time will affect HPC workloads, performance, and system availability. While many enterprises have looked to scale-up NAS to meet their storage needs, this approach can lead to data islands that make it difficult to share data. Distributed, scale-out storage was developed to get around the technology limitations of scale-up NAS architectures.
Tags : 
     Panasas
By: Dell and Intel®     Published Date: Nov 18, 2015
The NCSA Private Sector Program creates a high-performance computing cluster to help corporations overcome critical challenges. Through its Private Sector Program (PSP), NCSA has provided supercomputing, consulting, research, prototyping and development, and production services to more than one-third of the Fortune 50, in manufacturing, oil and gas, finance, retail/wholesale, bio/medical, life sciences, technology and other sectors. "We're not the typical university supercomputer center, and PSP isn't a typical group," Giles says. "Our focus is on helping companies leverage high-performance computing in ways that make them more competitive."
Tags : 
     Dell and Intel®
By: AMD     Published Date: Nov 09, 2015
Graphics Processing Units (GPUs) have become a compelling technology for High Performance Computing (HPC), delivering exceptional performance per watt and impressive densities for data centers. AMD has partnered with Hewlett Packard Enterprise to offer compelling solutions that drive your HPC workloads to new levels of performance. Learn about the awe-inspiring performance and energy efficiency of the AMD FirePro™ S9150, found in multiple HPE servers including the popular 2U HPE ProLiant DL380 Gen9 server. See why open standards matter for HPC, and what AMD is doing in this area. Click here to read more on AMD FirePro™ server GPUs for HPE ProLiant servers.
Tags : 
     AMD
By: Avere Systems     Published Date: Jun 27, 2016
This white paper reviews common HPC-environment challenges and outlines solutions that can help IT professionals deliver best-in-class HPC cloud solutions without undue stress and organizational chaos. The paper:
• Identifies current issues, including data management, data center limitations, user expectations, and technology shifts, that stress IT teams and existing infrastructure across industries and HPC applications
• Describes the potential cost savings, operational scale, and new functionality that cloud solutions can bring to big compute
• Characterizes technical and other barriers to an all-cloud infrastructure, and describes how IT teams can leverage a hybrid cloud for compute power, maximum flexibility, and protection against lock-in scenarios
Tags : 
     Avere Systems
By: HPE     Published Date: Jul 21, 2016
Deep learning is a method of creating artificial intelligence systems that combine computer-based multi-layer neural networks with intensive training techniques and large data sets to enable analysis and predictive decision making. A fundamental aspect of deep learning environments is that they transcend finite programmable constraints to the realm of extensible and trainable systems. Recent developments in technology and algorithms have enabled deep learning systems not only to equal but to exceed human capabilities in the pace of processing vast amounts of information.
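To make the idea of multi-layer networks plus intensive training concrete, here is a minimal sketch, not taken from the paper, of a two-layer network learning XOR by gradient descent. The layer sizes, learning rate, and iteration count are arbitrary choices for illustration.

```python
# Minimal sketch of a multi-layer neural network trained by gradient descent,
# learning XOR. Illustrative only: real deep learning systems use many more
# layers, accelerators, and far larger data sets than this toy example.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 8))  # input -> hidden weights
W2 = rng.normal(size=(8, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = np.tanh(X @ W1)                      # hidden layer activations
    p = sigmoid(h @ W2)                      # output probabilities
    grad_out = p - y                         # sigmoid + cross-entropy gradient
    grad_h = (grad_out @ W2.T) * (1 - h**2)  # backpropagate through tanh
    W2 -= 0.1 * (h.T @ grad_out)             # gradient-descent updates
    W1 -= 0.1 * (X.T @ grad_h)

print(np.round(p.ravel(), 2))  # should approach [0, 1, 1, 0]
```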
Tags : 
     HPE
By: Oracle     Published Date: Nov 29, 2017
Download this webcast to learn more about the current state of technology, the challenges of the cloud, how the move to the cloud changes the DBA role, the realities of making that move, and more.
Tags : 
     Oracle
By: Department for International Trade DIT     Published Date: Jan 31, 2018
The UK's Department for International Trade helps overseas companies locate and grow in the UK.
Tags : technology, department, international, trade
     Department for International Trade DIT
By: Rohde & Schwarz Cybersecurity     Published Date: Nov 28, 2017
DPI software must inspect packets at high wire speeds, so throughput and the resources required are critical factors. Keeping the resource footprint of integrated DPI and application classification technology low is essential: the fewer cores (on a multi-core processor) and the less on-board memory an engine needs, the better. Multi-threading provides almost linear scalability on multi-core systems. In addition, highly optimized flow tracking is required for handling millions of concurrent subscribers.
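As an illustration of what flow tracking means, the toy sketch below groups packets into flows by their normalized 5-tuple and keeps per-flow state. It is not Rohde & Schwarz code; the Packet shape and the "http" classification rule are invented for the example, and a production engine would use lock-free, per-core hash tables to sustain the wire speeds discussed above.

```python
# Toy illustration of DPI flow tracking: packets are grouped into flows by
# their 5-tuple so classification state can be accumulated per connection.
from collections import namedtuple

Packet = namedtuple("Packet", "src_ip dst_ip src_port dst_port proto payload")

flows = {}  # normalized 5-tuple -> accumulated flow state

def flow_key(pkt):
    # Normalize direction so both halves of a connection map to one flow.
    a = (pkt.src_ip, pkt.src_port)
    b = (pkt.dst_ip, pkt.dst_port)
    return (min(a, b), max(a, b), pkt.proto)

def track(pkt):
    state = flows.setdefault(flow_key(pkt),
                             {"packets": 0, "bytes": 0, "app": None})
    state["packets"] += 1
    state["bytes"] += len(pkt.payload)
    if state["app"] is None and pkt.payload.startswith(b"GET "):
        state["app"] = "http"  # stand-in for real application classification
    return state

track(Packet("10.0.0.1", "10.0.0.2", 51000, 80, "tcp", b"GET / HTTP/1.1\r\n"))
print(flows)
```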
Tags : detection, rate, performance, efficiency, accuracy, encrypted apps, integration, metadata
     Rohde & Schwarz Cybersecurity
By: Rohde & Schwarz Cybersecurity     Published Date: Nov 28, 2017
According to many market research analysts, the global wireless access point (WAP) market is anticipated to continue its upward trajectory, growing at an impressive compound annual growth rate (CAGR) of approximately 8% through 2020. Many enterprises are utilizing cloud computing technology for cost-cutting purposes, eliminating the investments required for storage hardware and other physical infrastructure. With significant growth expected in Internet usage, particularly bandwidth-consuming video traffic, WAP vendors need to enable their customers to monitor and improve device performance, improve the end-user experience, and enhance security. These customers include enterprises that offer Internet access to patrons, such as airports, hotels, and retail/shopping centers. These external Internet access providers can differentiate themselves by offering optimum service through advanced network analytics, traffic shaping, application control, security capabilities and more.
Tags : utilization, challenges, dpi, benefits, airport, public, wifi, qoe
     Rohde & Schwarz Cybersecurity
By: Microsoft     Published Date: Oct 12, 2017
Customers expect more from brands when it comes to convenience, resolution times, and agent expertise. Evolving customer preferences are tightly linked to innovations in digital technology, and brands must embrace both in order to keep pace with heightened expectations. The good news is that brands that can deliver on expectations are rewarded with higher rates of customer retention and loyalty. The Microsoft 2017 State of Global Customer Service survey polled 5,000 people from Brazil, Germany, Japan, the United Kingdom and the United States. We continue to find commonalities along with distinct differences across locales. And though people in all age groups are embracing new digital trends, millennials especially are shaping the way brands need to think about the future of customer service engagement.
Tags : customer service, customer loyalty, digital trends, multiple channels, microsoft
     Microsoft
By: Oracle ZDLRA     Published Date: Jan 10, 2018
Business leaders expect two things from IT: keep mission-critical applications available and performing 24x7, and, if something does happen, recover quickly without losing any critical data, so there is no impact on the revenue stream. Of course, there is a gap between this de facto expectation from nontechnical business leaders and what current technology is actually capable of delivering. For mission-critical workloads, which are most often hosted on databases, organizations may choose to implement high availability (HA) technologies within the database to avoid downtime and data loss.
Tags : recovery point, recovery time, backup appliance, san/nas, service level agreement, oracle
     Oracle ZDLRA
By: Oracle Dyn     Published Date: Dec 06, 2017
Secondary DNS (sometimes referred to as multi-DNS) operates in an “always on” manner to complement your existing infrastructure as an additional authoritative DNS service. When an end user’s recursive server initiates a DNS request, both the “primary” DNS service and the “secondary” DNS will respond as soon as they receive the request. The response that reaches the recursive server first will be passed back to the end user, completing their request.
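The "first response wins" behavior can be sketched in code. The example below, our own illustration rather than anything from the paper, uses the dnspython library to query two authoritative servers in parallel and keep whichever answer arrives first; the nameserver addresses and query name are placeholders.

```python
# Sketch of the "first authoritative response wins" race described above.
# Uses dnspython (an assumed tool choice); the nameserver IPs stand in for
# a primary and a secondary DNS provider, both authoritative for the zone.
from concurrent.futures import ThreadPoolExecutor, as_completed

import dns.message
import dns.query

NAMESERVERS = ["192.0.2.1", "198.51.100.1"]  # placeholder primary/secondary

def ask(server, qname="www.example.com"):
    query = dns.message.make_query(qname, "A")
    return server, dns.query.udp(query, server, timeout=2.0)

with ThreadPoolExecutor(max_workers=len(NAMESERVERS)) as pool:
    futures = [pool.submit(ask, ns) for ns in NAMESERVERS]
    for future in as_completed(futures):  # whichever answers first is used
        server, response = future.result()
        print(f"first answer from {server}:")
        for rrset in response.answer:
            print(rrset)
        break
```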
Tags : dns, service, server, infrastructure, technology, business, optimization
     Oracle Dyn
By: Mimecast     Published Date: Nov 28, 2017
With the pending EU General Data Protection Regulation (GDPR), your organization must consider a wide variety of changes for compliance if you hold EU resident data. Your organization should look at GDPR as an opportunity to modernize its storage, compliance and security practices. But which services should be considered? Download to learn more, including:
• How the right providers can help you build a business case for GDPR compliance
• Ways providers can directly aid in the compliance process
• Why the right tools can help with not just technology but process changes as well
Tags : software provider, cloud service provider, gdpr
     Mimecast
By: Carbonite     Published Date: Jan 04, 2018
For a backup solution to be considered flexible, it needs to satisfy several key business requirements. It should integrate seamlessly with any servers you’re running and provide full support for all the applications your business uses. It should enable you to protect assets in different parts of the country or overseas. And it should let you manage and monitor backups from anywhere. A flexible backup solution gives you everything you need to protect the technology investments you make now and in the future. So instead of having to buy multiple solutions to support your changing needs, you can have a single solution that adapts to fit your environment. We call that flexible deployment.
Tags : 
     Carbonite