There is rarely a dull moment for IBM i administrators, as their operating environments continuously absorb and distribute new data. While this trend certainly keeps teams engaged, it also triggers fears that current protocols may not be enough to meet the challenges of tomorrow. First and foremost, there is often some doubt as to whether storage plans will be able to scale to manage the sheer influx of information.
This trend has been driven primarily by the proliferation of endpoints as companies look to mobilize their workforces, and empowered employees bring new hardware into the office. However, there has been a parallel diversification in the type of data being trafficked. Companies are now sifting through a wider variety of internal and external sources in pursuit of a business intelligence edge, and some are even starting to see machine-to-machine communication take off in earnest across their IT ecosystems.
Table partitioning, an organizational approach by which table data is divided across multiple storage objects, appears to be an increasingly attractive answer to these initial anxieties. However, without properly aligned tools in place, administrators could find themselves generating far more queries and risking incomplete intelligence as they struggle to restore the information to its original formatting.
The Partitioning Paradox
As is often the case in evolving IBM i environments, administrators are faced with the problem of scope. In the era of data-driven decision-making and heightened consciousness of compliance concerns, IT teams across industries have received executive mandates to gather more raw records and hold onto them for longer periods of time. Not surprisingly, storage capacity plans within data centers are already feeling the strain.
Table (or range) partitioning provides a logical antidote by giving data management professionals the option to separate rapidly expanding files into multiple partitions spread across several separate storage objects. IBM i users also have the freedom to organize the distributed data however they see fit.
The most popular defining characteristics are usually date and time. For example, a partitioned table may divide all customer transactions and history by transaction date or by ranges of customer numbers. Alternatively, a partition could separate employees according to their corporate identification number or the first letter of their last name.
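To make the date-based example concrete, here is a minimal sketch in Db2 for i SQL. The library, table, and partition names are hypothetical, and the quarterly ranges are illustrative only:

```sql
-- Hypothetical ORDERS table, range-partitioned by transaction date
-- so that each quarter's rows land in their own partition.
CREATE TABLE SALESLIB.ORDERS (
  ORDER_ID   INTEGER       NOT NULL,
  CUST_NO    INTEGER       NOT NULL,
  ORDER_DATE DATE          NOT NULL,
  AMOUNT     DECIMAL(11,2)
)
PARTITION BY RANGE (ORDER_DATE) (
  PARTITION Q1_2024 STARTING '2024-01-01' ENDING '2024-03-31',
  PARTITION Q2_2024 STARTING '2024-04-01' ENDING '2024-06-30',
  PARTITION Q3_2024 STARTING '2024-07-01' ENDING '2024-09-30',
  PARTITION Q4_2024 STARTING '2024-10-01' ENDING '2024-12-31'
);
```

Rows inserted into the table are routed automatically to the partition whose range covers their ORDER_DATE value.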
Whatever the details ultimately are, partitioning allows administrators to welcome vastly more data into a single environment and more easily support continuous updating and migration. In fact, as many as 32,767 partitions can be created in a single table if IBM i users are feeling so bold. Partitions can be independently added or attached, multiple partition ranges can be stored in a single table space, and backward compatibility with non-partitioned indexes is preserved.
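Adding a partition independently is a single statement. Continuing the hypothetical ORDERS example, a new quarter could be brought online like this (names and dates are illustrative):

```sql
-- Extend the hypothetical ORDERS table with a new quarterly range;
-- existing partitions and their data are untouched.
ALTER TABLE SALESLIB.ORDERS
  ADD PARTITION Q1_2025 STARTING '2025-01-01' ENDING '2025-03-31';
```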
As these partitions proliferate to help IBM i users stay under storage limits, complexity can quickly become an issue when orchestrating and overseeing all of these micro assets. With Query/400, the IBM i query standard, users are only afforded access to one partition member at a time. As a result, if three partition members sit on the same system, end users will need three separate queries to retrieve them, plus additional work to consolidate the data, all using the CQE (Classic Query Engine). And of course, the needed number is rarely restricted to just three.
Consequently, administrators face the prospect of significantly longer query processing time or incomplete and possibly inaccurate business intelligence. With companies unable to bear such outcomes for prolonged periods in today's operating environments, the search must begin for more targeted and flexible solutions.
The SEQUEL Solution
Although moving to a multiple-partition file system can help companies proactively address storage scalability demands, the primary fear remains that data access will slow down or become too complicated. What sets SEQUEL apart from legacy IBM i data analysis solutions is the advantage of aggregation. When SEQUEL runs with a database setting of *LOCALSYS, it uses the SQE (SQL Query Engine), a faster, more efficient method of processing queries, to construct a single query that collects data across multiple partition members and presents it all in a unified environment.
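Under SQE, one statement spans every partition transparently. Against the hypothetical ORDERS table sketched earlier, the aggregation described above could look like this:

```sql
-- One SQE query reads across all partitions as a single table;
-- no per-member overrides or work files are required.
SELECT CUST_NO,
       SUM(AMOUNT) AS TOTAL_SALES
FROM   SALESLIB.ORDERS
WHERE  ORDER_DATE BETWEEN '2024-01-01' AND '2024-12-31'
GROUP  BY CUST_NO
ORDER  BY TOTAL_SALES DESC;
```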
These optimized querying capabilities examine data partitions based on how they are constructed and indexed, allowing database administrators to retrieve only the records that match across the targeted partitions. For IBM i users who currently lack access to this capability, the workaround has been a considerable source of frustration. Traditionally, a temporary work file has to be created to hold all of the records from each individual request. Then, assuming it is compact enough to meet system parameters, that file has to be analyzed once again.
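One traditional shape of that workaround, sketched here with hypothetical names, is to address each partition member individually and collect the rows into a work file before analysis:

```sql
-- Per-member aliases expose one partition at a time to a query tool
-- (alias, library, and partition names are hypothetical).
CREATE ALIAS QTEMP.ORD_Q1 FOR SALESLIB.ORDERS (Q1_2024);
CREATE ALIAS QTEMP.ORD_Q2 FOR SALESLIB.ORDERS (Q2_2024);

-- Each alias is queried separately and the rows are copied into a
-- temporary work file, which must then be queried again.
CREATE TABLE QTEMP.ORDWORK AS
  (SELECT * FROM SALESLIB.ORDERS) WITH NO DATA;
INSERT INTO QTEMP.ORDWORK SELECT * FROM QTEMP.ORD_Q1;
INSERT INTO QTEMP.ORDWORK SELECT * FROM QTEMP.ORD_Q2;
```

Every additional partition adds another alias, another query, and another copy step, which is exactly the overhead the single-query approach removes.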
SEQUEL cuts out the intermediate steps and provides incredibly fast access to crucial data points. SEQUEL functionality ensures that programmers and end users alike will no longer face a stack of cleansing and reformatting requests before they can turn their data into actionable decisions. From the outset, queries are designed to return intuitively displayed results that clearly demonstrate the solution's ROI.
All of these newfound capabilities do not come at the expense of reliable baseline performance, however. SEQUEL is designed to convert existing queries into compatible objects to nullify redundant programming. What's more, reports can be converted and easily enhanced for viewing on a variety of platforms. As a result, fulfilling a request for an accounting executive who relies on an email with an attached Excel document each week is performed with comparable ease while still serving the same needs for a traveling salesperson who demands remote, browser-based intelligence.
With capacity planning cured, and migration and compatibility concerns nullified, IBM i users may find that SEQUEL is actually their silver bullet to sustainable data management, analysis enablement, and fast, efficient delivery.