database transaction

Results 1 - 25 of 34
By: Silverpop     Published Date: Jun 15, 2012
If your contact base doesn't continue to grow, how can you expect your business to grow? Learn how you can help expand your company's marketing database with ideas for collecting data on-site, tapping social media, driving opt-ins via transactional emails and more.
Tags : marketing database, silverpop, marketing, marketing campaigns, reach, customers
     Silverpop
By: IBM APAC     Published Date: Apr 27, 2018
While relying on x86 servers and Oracle databases to support its stock trading systems, Wanlian Securities found that processing a rapidly increasing number of transactions quickly became a huge challenge. The firm shifted to IBM FlashSystem, which helped cut average response time for its Oracle databases from 10 milliseconds to less than 0.4 milliseconds and improved CPU utilization by 15%. Download this case study now.
Tags : 
     IBM APAC
By: IBM     Published Date: Oct 13, 2016
Compare IBM DB2 pureScale with any other offering being considered for implementing a clustered, scalable database configuration; see how each delivers continuous availability and why that matters. Download now!
Tags : data, queries, database operations, transactional databases, clustering, it management, storage
     IBM
By: SAP     Published Date: Jul 17, 2012
Relational database management systems (RDBMSs) are software systems that manage databases as structured sets of tables containing rows and columns with references to one another through key values. They include the ability to optimize storage, process transactions, perform queries, and preserve the integrity of data structures. When used with applications, they provide the beating heart of the collection of business functions supported by those applications. RDBMSs vary considerably in the factors that impact the total cost of running a database application, yet users seldom perform a disciplined procedure to calculate such costs. Most users choose instead to remain with a single vendor's RDBMS and never revisit the question of ongoing hardware, software, and staffing fees.
Tags : sap, infrastructure, database, data management, white paper, management, storage, business functions
     SAP
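The transaction-processing capability described in the abstract above can be illustrated with a minimal sketch using Python's built-in sqlite3 module. The `accounts` table and its values are hypothetical, chosen only to show how a multi-statement transaction commits or rolls back as a unit:

```python
import sqlite3

# In-memory database with a hypothetical table keyed by id.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")
conn.commit()

# A transfer runs as one transaction: both updates commit together or not at all.
try:
    with conn:  # the connection context manager commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 30 WHERE id = 1")
        conn.execute("UPDATE accounts SET balance = balance + 30 WHERE id = 2")
except sqlite3.Error:
    pass  # on failure the database is left unchanged

balances = dict(conn.execute("SELECT id, balance FROM accounts ORDER BY id"))
print(balances)  # {1: 70, 2: 80}
```

The rollback-on-error behavior of the `with conn:` block is what preserves the data-structure integrity the abstract refers to.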
By: Oracle ZDLRA     Published Date: Jan 10, 2018
Traditional backup systems fail to meet the database protection and recovery requirements of modern organizations. These systems require ever-growing backup windows, negatively impact the performance of mission-critical production databases, and deliver recovery time objectives (RTO) and recovery point objectives (RPO) measured in hours or even days. They fail to meet the requirements of high-volume, highly transactional databases, potentially costing millions in lost productivity and revenue, regulatory penalties, and reputation damage due to an outage or data loss.
Tags : data protection, backup speed, recovery, overhead, assurance, storage, efficiency, oracle
     Oracle ZDLRA
By: Oracle     Published Date: May 03, 2017
Traditional backup systems fail to meet the needs of modern organisations by focusing on backup, not recovery. They treat databases as generic files to be copied, rather than as transactional workloads with specific data integrity, consistency, performance, and availability requirements.
Tags : 
     Oracle
By: Oracle     Published Date: Oct 20, 2017
Traditional backup systems fail to meet the needs of modern organizations by focusing on backup, not recovery. They treat databases as generic files to be copied, rather than as transactional workloads with specific data integrity, consistency, performance, and availability requirements. Additionally, highly regulated industries, such as financial services, are subject to ever-increasing regulatory mandates that require stringent protection against data breaches, data loss, malware, ransomware, and other risks. These risks require fiduciary-class data recovery to eliminate data loss exposure and ensure data integrity and compliance.
Tags : 
     Oracle
By: Oracle     Published Date: Aug 02, 2018
Traditional backup systems fail to meet the needs of modern organizations by focusing on backup, not recovery. They treat databases as generic files to be copied, rather than as transactional workloads with specific data integrity, consistency, performance, and availability requirements. Additionally, highly regulated industries, such as financial services, are subject to ever-increasing regulatory mandates that require stringent protection against data breaches, data loss, malware, ransomware, and other risks. These risks require fiduciary-class data recovery to eliminate data loss exposure and ensure data integrity and compliance. This book explains modern database protection and recovery challenges (Chapter 1), the important aspects of a database protection and recovery solution (Chapter 2), Oracle’s database protection and recovery solutions (Chapter 3), and key reasons to choose Oracle for your database protection and recovery needs (Chapter 4).
Tags : 
     Oracle
By: IBM     Published Date: Jun 29, 2018
IBM LinuxONE™ is an enterprise Linux server engineered to deliver cloud services that are secure, fast and instantly scalable. The newest member of the family, IBM LinuxONE Emperor™ II, is designed for businesses where the following may be required:
• protecting sensitive transactions and minimizing business risk
• accelerating the movement of data, even with the largest databases
• growing users and transactions instantly while maintaining operational excellence
• accessing an open platform that speeds innovation
Tags : 
     IBM
By: Group M_IBM Q418     Published Date: Sep 10, 2018
IBM LinuxONE™ is an enterprise Linux server engineered to deliver cloud services that are secure, fast and instantly scalable. The newest member of the family, IBM LinuxONE Emperor™ II, is designed for businesses where the following may be required:
• protecting sensitive transactions and minimizing business risk
• accelerating the movement of data, even with the largest databases
• growing users and transactions instantly while maintaining operational excellence
• accessing an open platform that speeds innovation
Tags : 
     Group M_IBM Q418
By: Group M_IBM Q418     Published Date: Dec 18, 2018
IBM LinuxONE™ is an enterprise Linux server engineered to deliver cloud services that are secure, fast and instantly scalable. The newest member of the family, IBM LinuxONE Emperor™ II, is designed for businesses where the following may be required:
• protecting sensitive transactions and minimizing business risk
• accelerating the movement of data, even with the largest databases
• growing users and transactions instantly while maintaining operational excellence
• accessing an open platform that speeds innovation
Tags : 
     Group M_IBM Q418
By: Group M_IBM Q119     Published Date: Dec 18, 2018
IBM LinuxONE™ is an enterprise Linux server engineered to deliver cloud services that are secure, fast and instantly scalable. The newest member of the family, IBM LinuxONE Emperor™ II, is designed for businesses where the following may be required:
• protecting sensitive transactions and minimizing business risk
• accelerating the movement of data, even with the largest databases
• growing users and transactions instantly while maintaining operational excellence
• accessing an open platform that speeds innovation
Tags : 
     Group M_IBM Q119
By: NetApp     Published Date: Dec 15, 2014
Organizations of all kinds rely on their relational databases for both transaction processing and analytics, but many still have challenges in meeting their goals of high availability, security, and performance. Whether planning for a major upgrade of existing databases or considering a net new project, enterprise solution architects should realize that the storage capabilities will matter. NetApp’s systems, software, and services offer a number of advantages as a foundation for better operational results.
Tags : database, transaction processing, analytics, enterprise solution architects, storage capabilities, storage
     NetApp
By: Riverbed     Published Date: May 18, 2012
As your business needs become more dynamic, monitoring systems will be key to understanding how applications fulfill those needs, and your ability to provide governance will determine the success of your entire enterprise. With complexity growing at such a rapid rate, automated application discovery and dependency mapping will soon be not just useful but mandatory.
Tags : service management, management software, infrastructure, devops, cloud performance monitoring, application monitoring, database monitoring, end user experience monitoring, transaction monitoring, data management
     Riverbed
By: IBM     Published Date: Jul 21, 2016
IBM's recently released DB2 version 11.1 for Linux, Unix and Windows (LUW) is a hybrid database that IBM says can handle transactional and analytic workloads thanks to its BLU Acceleration technology, which features an in-memory column store for analytical workloads that can scale across a massively parallel cluster.
Tags : ibm, db2, analytics, mpp, data warehousing
     IBM
By: IBM     Published Date: Jul 06, 2017
DB2 is a proven database for handling the most demanding transactional workloads. But the recent trend is to enable relational databases to handle analytic queries more efficiently by adding an in-memory column store alongside to aggregate data and provide faster results. IBM's BLU Acceleration technology does exactly that. While BLU isn't brand new, the ability to spread the column store across a massively parallel processing (MPP) cluster of up to 1,000 nodes is a new addition to the technology. That, along with simpler monthly pricing options and integration with dashDB data warehousing in the cloud, makes DB2 for LUW a very versatile database.
Tags : memory analytics, database, efficiency, acceleration technology, aggregate data
     IBM
By: Hewlett Packard Enterprise     Published Date: Aug 02, 2017
In midsize and large organizations, critical business processing continues to depend on relational databases including Microsoft® SQL Server. While new tools like Hadoop help businesses analyze oceans of Big Data, conventional relational-database management systems (RDBMS) remain the backbone for online transaction processing (OLTP), online analytic processing (OLAP), and mixed OLTP/OLAP workloads.
Tags : database usage, database management, server usage, data protection
     Hewlett Packard Enterprise
By: Network Automation     Published Date: Dec 10, 2008
The challenge of creating a robust errors database, the core of iCheck, is that information must be pulled from a multitude of disparate sources. Sometimes, the data source is a database with its unique schema and field names that must be mapped to the iCheck database. Other times, the data source is a text file which must be accessed via FTP, parsed, and loaded into iCheck. And other times, the data is in RETS (Real Estate Transaction Standard) format, a real estate data format similar to XML, which is accessed via HTTP.
Tags : network automation, data sources, data accuracy, database errors, networking, data management
     Network Automation
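The multi-source ingestion described in the Network Automation abstract above can be sketched in a few lines of Python. The feed contents, field names, and the unified record layout below are all hypothetical; the point is only to show CSV-style text (as might arrive via FTP) and RETS-style XML (as might arrive via HTTP) being mapped into one errors table:

```python
import csv
import io
import xml.etree.ElementTree as ET

# Two hypothetical feeds describing the same kind of error record.
csv_feed = "listing_id,error\nA17,missing price\n"                 # text file, e.g. fetched via FTP
rets_feed = '<RETS><DATA listing="B42" error="bad date"/></RETS>'  # RETS-style XML, e.g. via HTTP

errors = []  # unified "errors database" rows: (listing_id, message)

# Map the CSV feed's column names onto the unified schema.
for row in csv.DictReader(io.StringIO(csv_feed)):
    errors.append((row["listing_id"], row["error"]))

# Map the XML feed's attribute names onto the same schema.
for rec in ET.fromstring(rets_feed).iter("DATA"):
    errors.append((rec.get("listing"), rec.get("error")))

print(errors)  # [('A17', 'missing price'), ('B42', 'bad date')]
```

Each source keeps its own parser, but every path converges on the same row shape before loading, which is the essence of the schema-mapping problem the abstract describes.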
By: Dell PC Lifecycle     Published Date: Mar 09, 2018
In the end, the Dell EMC VMAX 250F with Intel® Xeon® Processor All Flash storage array lived up to its promises better than the HPE 3PAR 8450 Storage array did. We experienced minimal impact to database performance when the VMAX 250F processed transactional and data mart loading at the same time. This is useful whether you're performing extensive backups or compiling large amounts of data from multiple sources. Intel Inside®. New Possibilities Outside.
Tags : 
     Dell PC Lifecycle
By: Oracle     Published Date: Mar 22, 2018
Is your information technology (IT) organization pressured to get more work done with fewer people or on a constricted budget? Do you need to make IT a competitive asset rather than a cost center? Does your business struggle with slow software applications or data that's too often unavailable? If you answered "yes" to any of these questions, it's time to take a close look at Oracle Exadata, the world's fastest database machine exclusively designed to run Oracle Database. It is the first database machine optimized for data warehousing, online transaction processing (OLTP), and database consolidation workloads as well as in-memory databases and database as a service (DBaaS).
Tags : 
     Oracle
By: Oracle     Published Date: Oct 20, 2017
With the growing size and importance of information stored in today’s databases, accessing and using the right information at the right time has become increasingly critical. Real-time access and analysis of operational data is key to making faster and better business decisions, providing enterprises with unique competitive advantages. Running analytics on operational data has been difficult because operational data is stored in row format, which is best for online transaction processing (OLTP) databases, while storing data in column format is much better for analytics processing. Therefore, companies normally have both an operational database with data in row format and a separate data warehouse with data in column format, which leads to reliance on “stale data” for business decisions. With Oracle’s Database In-Memory and Oracle servers based on the SPARC S7 and SPARC M7 processors, companies can now store data in memory in both row and column formats, and run analytics on their operational data.
Tags : 
     Oracle
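The row-versus-column distinction in the abstract above can be sketched in plain Python. The order records and field names are hypothetical; the sketch only shows why an analytic aggregate touches less data in a columnar layout:

```python
# The same three hypothetical order records in the two layouts discussed above.
rows = [  # row format: one tuple per transaction (good for OLTP inserts/updates)
    ("ord1", "widget", 5),
    ("ord2", "gadget", 2),
    ("ord3", "widget", 7),
]

columns = {  # column format: one array per attribute (good for analytic scans)
    "order_id": ["ord1", "ord2", "ord3"],
    "product":  ["widget", "gadget", "widget"],
    "quantity": [5, 2, 7],
}

# An analytic query ("total quantity ordered") reads only the one column it
# needs, instead of every field of every row.
total = sum(columns["quantity"])
print(total)  # 14
```

A dual-format in-memory store, as the abstract describes, effectively keeps both representations of the same data so OLTP writes and analytic scans each get their preferred layout.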
By: Oracle CX     Published Date: Oct 20, 2017
With the growing size and importance of information stored in today’s databases, accessing and using the right information at the right time has become increasingly critical. Real-time access and analysis of operational data is key to making faster and better business decisions, providing enterprises with unique competitive advantages. Running analytics on operational data has been difficult because operational data is stored in row format, which is best for online transaction processing (OLTP) databases, while storing data in column format is much better for analytics processing. Therefore, companies normally have both an operational database with data in row format and a separate data warehouse with data in column format, which leads to reliance on “stale data” for business decisions. With Oracle’s Database In-Memory and Oracle servers based on the SPARC S7 and SPARC M7 processors, companies can now store data in memory in both row and column formats, and run analytics on their operational data.
Tags : 
     Oracle CX
By: Oracle     Published Date: Oct 20, 2017
Databases have long served as the lifeline of the business. Therefore, it is no surprise that performance has always been top of mind. Whether it be a traditional row-formatted database to handle millions of transactions a day or a columnar database for advanced analytics to help uncover deep insights about the business, the goal is to service all requests as quickly as possible. This is especially true as organizations look to gain an edge on their competition by analyzing data from their transactional (OLTP) database to make more informed business decisions. The traditional model (see Figure 1) for doing this leverages two separate sets of resources, with an ETL being required to transfer the data from the OLTP database to a data warehouse for analysis. Two obvious problems exist with this implementation. First, I/O bottlenecks can quickly arise because the databases reside on disk and second, analysis is constantly being done on stale data. In-memory databases have helped address p
Tags : 
     Oracle
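The traditional OLTP-to-warehouse model the abstract describes can be sketched with two sqlite3 databases standing in for the two systems. The `sales` table and its rows are hypothetical; the sketch shows the ETL copy step and why the warehouse side holds a snapshot that goes stale between runs:

```python
import sqlite3

# Hypothetical OLTP database holding raw transactions.
oltp = sqlite3.connect(":memory:")
oltp.execute("CREATE TABLE sales (product TEXT, amount INTEGER)")
oltp.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("widget", 10), ("widget", 15), ("gadget", 7)])
oltp.commit()

# Separate warehouse database: the ETL step copies and aggregates the data,
# so analysis runs against a snapshot that is stale until the next ETL run.
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE sales_summary (product TEXT, total INTEGER)")
warehouse.executemany(
    "INSERT INTO sales_summary VALUES (?, ?)",
    oltp.execute("SELECT product, SUM(amount) FROM sales GROUP BY product"),
)
warehouse.commit()

summary = dict(warehouse.execute("SELECT product, total FROM sales_summary"))
print(summary)  # {'gadget': 7, 'widget': 25}
```

Any sale inserted into `oltp` after the `executemany` copy is invisible to queries on `warehouse` until the ETL runs again, which is exactly the "stale data" problem in-memory dual-format databases aim to remove.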
By: Oracle CX     Published Date: Oct 20, 2017
Databases have long served as the lifeline of the business. Therefore, it is no surprise that performance has always been top of mind. Whether it be a traditional row-formatted database to handle millions of transactions a day or a columnar database for advanced analytics to help uncover deep insights about the business, the goal is to service all requests as quickly as possible. This is especially true as organizations look to gain an edge on their competition by analyzing data from their transactional (OLTP) database to make more informed business decisions. The traditional model (see Figure 1) for doing this leverages two separate sets of resources, with an ETL being required to transfer the data from the OLTP database to a data warehouse for analysis. Two obvious problems exist with this implementation. First, I/O bottlenecks can quickly arise because the databases reside on disk and second, analysis is constantly being done on stale data. In-memory databases have helped address p
Tags : 
     Oracle CX

Get your white papers featured in the insideHPC White Paper Library contact: Kevin@insideHPC.com