Border Control - The case for Input/Output Servers

Tommy Atkins

Introduction

This post is a somewhat belated follow-up to a previous post, entitled “Pulling the DB2 Trigger – Without shooting yourself in the foot”, which may be found at

https://www.ile-rpg.org/forum/event-triggers/pulling-the-db2-trigger-without-shooting-yourself-in-the-foot/

Please take time to read the above article as it will provide background information relevant to this post.

Although this was written some time ago, I was prompted to post it now by Neil Woodhams’ article on a related topic:
https://www.linkedin.com/posts/neil-woodhams-3b333515_field-resizing-conundrum-ibm-i-with-a-legacy-activity-7216449768384811010-F10G?

Much has been written about the pros and cons of modernizing (renovating) heritage systems by separating the database, business logic and user interface, an approach usually referred to as the MVC model.

This involves creating a normalized, relational database and utilizing the functionality provided by DB2 for i (Constraints, Triggers, Journaling and Commitment Control) to move as much code and control as possible into the database itself, allowing it to be self-managing and to provide a base onto which data-centric applications can be built.

The Heritage Conundrum

Heritage systems have been developed over many years, mostly under the control of the programming staff, so the validation, management and control of the database are deeply woven into the fabric of the application code itself; thousands of programs may touch thousands of files.

The difficulty that many, if not most, installations face is how to achieve the ideal of a normalized, relational, self-managing DB2 database without having every system “crash” every time the database is changed or a validation rule or constraint is added to a file.

The other major consideration is the prospect of having to go back and retrofit old code to work with the newer paradigm introduced by modernization. This process is repetitive and time-consuming and provides little, if any, visible return on investment in the short term. It does, however, severely impact the organization’s ability to enhance and maintain the existing system, and it hampers the business.

And herein lies the conundrum: if the enterprise systems were already modernized and running on a fully data-centric database, with all its benefits of added agility, single-usage code blocks, simplified business logic and so on, there would be enough time to do the retrofitting suggested above, which would then, of course, no longer be necessary.

This raises the question: “We have been doing what we are doing now quite successfully for the last 30+ years. Why not carry on the same way into the future?”

Heritage code was often built in a less than efficient manner due to numerous factors, such as having to use the RPG cycle because no other way was available. Much of the functionality available now did not exist in the OS 30 years ago, so code had to be far more complex and extensive than it would need to be today.

Over the years, many programmers with many different ideas and methods have continued along the same basic path, bolting on pieces of new functionality without a complete review of the code requirements. The coding has therefore grown more complex as time has gone by, and it will continue to do so for as long as this pattern is allowed to persist, causing the entire application to become more brittle and to require ever more effort to keep it performing as the organization requires.

It will eventually break, but at what expense, who knows!

Protecting the Database

I honestly believe that no businessman, big or small, would dispute the value of the data contained in the company’s database. The current data, plus the years of historical information held alongside it, form the basis of the day-to-day decisions that keep the company growing and profitable.

Another consideration is the possibility of theft or corruption of information in the database. This can seriously affect the profitability, and even the viability, of a business if stolen information falls into the wrong hands.

The above considerations clearly emphasize that the need to protect the database and its data should be of paramount importance to any organization.

“Back in the day”, and here I am talking about the S/3x and AS/400, where most of these heritage systems first saw the light of day, there was no way to alter data in the database except through an officially developed and (hopefully) tested application program, using a 5250 (green) screen, which (also hopefully) performed all the validations and relationship checking required to ensure the accuracy of the data.

I will mention here (in a whisper, please), in case you thought I had forgotten them, DFU and its 3rd-party sibling DBU, which allowed people to change data in the database directly.

Apart from these there was no other way!

At the current time there is an increasing number of tools that can access data in the database directly, bypassing the built-in validations and controls of the official application programs. This can be a serious exposure for an organization, whether through deliberate hacking or through accidental corruption of data by somebody pushing the wrong button.

In addition, more and more organizations are using 3rd-party software, often developed for the PC platform, which interacts with the organization’s database and is completely unaware of, and unconcerned about, the validations and controls built into the official application systems. This is another potentially significant source of corruption of the data in the database.

Any businessman who knows that he is basing critical decisions about the well-being and growth of the organization on data that may be invalid, corrupt or just plain wrong would almost certainly want the quality of that data to be provably improved, so that the percentage of possible errors is brought as close to zero as possible.

Using object authority, user/group profiles and authorization lists to control access to the database tables is, where thousands of files and hundreds of users are involved, a monumental task, open to errors, to say the least. Moreover, the platform’s authority capability cannot prevent a user who is authorized to a database file for the purposes of an “official” program from using a different client to bypass the application’s validation rules and corrupt the data.

These concerns and issues are amongst the most significant considerations that have given rise to the concept of separating the database from the application code (MVC) and placing the validation rules, relationships, synchronization and access control into the database itself, thereby making it as self-aware and self-managing as possible.

Border Control via I/O Servers

Truly separating the database from applications and other client products requires an abstraction layer made up of modules/procedures which provide input/output services to the application code on behalf of the database files, both physical and logical.

Ideally, every physical file/table and every logical file, index or view (collectively “files” from here on) should have a separate I/O Server: typically a module containing all the required/authorized procedures, compiled and then bound into a service program along with a number of other servers.
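
To make this concrete, here is a minimal free-form RPG sketch of what one such I/O Server module might look like. Every name in it (CUSTMAST, CUSTREC, CUSTIO, Cust_Read, Cust_Update, CUSTID, CREDLIM) is hypothetical, and the validation shown is a placeholder for whatever business rules the file actually carries:

    **free
    ctl-opt nomain option(*srcstmt);

    // Hypothetical I/O Server module CUSTIO for file CUSTMAST
    // (record format CUSTREC), to be bound into a service program.
    dcl-f CUSTMAST usage(*update) keyed;

    // Keyed read on behalf of the caller; returns *on if found.
    dcl-proc Cust_Read export;
      dcl-pi *n ind;
        custId packed(7:0) const;
        rec    likerec(CUSTREC:*all);
      end-pi;

      chain (custId) CUSTREC rec;
      return %found(CUSTMAST);
    end-proc;

    // All updates flow through the server, so the business rules
    // live in exactly one place instead of in every client program.
    dcl-proc Cust_Update export;
      dcl-pi *n ind;
        rec likerec(CUSTREC:*all);
      end-pi;

      if rec.CREDLIM < 0;              // placeholder validation rule
        return *off;
      endif;

      chain (rec.CUSTID) CUSTREC;
      if not %found(CUSTMAST);
        return *off;
      endif;

      update CUSTREC rec;
      return *on;
    end-proc;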

Once each file, or at the very least each critical file, has been provided with an I/O Server, the database can be closed down to one of two levels of access, in three easy steps, using the “object authority” functionality provided by the O/S, as follows (a CL sketch of the commands appears after the list):

  1. All files and all I/O Server service programs should have their ownership changed to a single profile, for example DBOWNER, which has no log-on capability.
  2. All files should have all other authorities removed and the *PUBLIC authority changed to either *EXCLUDE (Level 1) or *USE (Level 2).
  3. All I/O Server service programs should be compiled to adopt the owner’s authority (DBOWNER) and have *PUBLIC authority set to *USE. No other authorities should be defined.
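
For illustration, and assuming a hypothetical library MYLIB, file CUSTMAST, I/O Server module CUSTIO and owning profile DBOWNER, the three steps might look like this in CL (a sketch, not a tested script):

    /* Step 1: transfer ownership to DBOWNER                           */
    CHGOBJOWN OBJ(MYLIB/CUSTMAST) OBJTYPE(*FILE) NEWOWN(DBOWNER)

    /* Step 2: remove private authorities, then close *PUBLIC          */
    RVKOBJAUT OBJ(MYLIB/CUSTMAST) OBJTYPE(*FILE) USER(*ALL) AUT(*ALL)
    GRTOBJAUT OBJ(MYLIB/CUSTMAST) OBJTYPE(*FILE) USER(*PUBLIC) +
              AUT(*EXCLUDE)         /* Level 1; AUT(*USE) for Level 2 */

    /* Step 3: the server adopts DBOWNER's authority; *PUBLIC may call */
    CRTSRVPGM SRVPGM(MYLIB/CUSTIO) MODULE(MYLIB/CUSTIO) EXPORT(*ALL) +
              USRPRF(*OWNER) AUT(*USE)
    CHGOBJOWN OBJ(MYLIB/CUSTIO) OBJTYPE(*SRVPGM) NEWOWN(DBOWNER)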

This closes access to the database by all clients other than the I/O Servers, and as long as new files and servers adopt the same standards, no further maintenance is required.

LEVEL 1: Excludes all access (Read, Insert, Update and Delete) by all clients other than the I/O Servers.

LEVEL 2: Excludes Insert, Update and Delete access by all clients other than the I/O Servers. Read access remains available to all other clients.

It must be noted here that the *ALLOBJ user profile special authority will, for the users who hold it, negate the closure accomplished by the steps above. An audit of the user profile authorities in the “Production” environment is essential to identify the risk potential and reduce it wherever possible.

There is a method of eliminating the *ALLOBJ risk completely: a “Call Stack” routine (mentioned in the trigger post linked above) in the file’s *BEFORE trigger program identifies the caller of the event. If the caller is not the assigned I/O Server, or another approved program, the action can be rejected even if the user has *ALLOBJ authority.
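
As a sketch of how that might look, here is a simplified free-form RPG *BEFORE trigger. GetCallerPgm() is a hypothetical helper, not shown, that would walk the job’s call stack (for example via the QWVRCSTK Retrieve Call Stack API) and return the name of the program or service program that issued the database operation; CUSTIO is the illustrative server name used earlier:

    **free
    ctl-opt dftactgrp(*no) option(*srcstmt);

    // Hypothetical wrapper around a call-stack API such as QWVRCSTK.
    dcl-pr GetCallerPgm char(10) end-pr;

    // Standard trigger parameter list, simplified for the sketch.
    dcl-pi *n;
      trgBuffer char(32767) const;   // trigger buffer
      trgBufLen int(10)     const;   // trigger buffer length
    end-pi;

    dcl-c APPROVED_SERVER 'CUSTIO';  // the file's assigned I/O Server

    if GetCallerPgm() <> APPROVED_SERVER;
      // Failing the trigger with an escape message causes the I/O
      // operation itself to fail, even for a user holding *ALLOBJ.
      // (SND-MSG requires IBM i 7.4 or later; on earlier releases the
      // QMHSNDPM API achieves the same effect.)
      snd-msg *escape 'Direct access rejected - use the I/O Server';
    endif;

    return;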

This collection of I/O Servers, bound into service programs, effectively “ring-fences” all or part of the database as the initial protection and separation of the D/B from the rest of the application code and interfaces. The process can begin with as few as one file and gradually be expanded, over time, to include all the files in the database, at minimal risk to the organization and its applications.

A Balancing Act

It is an absolute fact that introducing even one line of code between the application and the DBMS will create additional overhead and will therefore negatively impact performance.

This would include Constraints, Triggers, Journaling, Commitment Control and I/O Servers. All of these are elements of a properly constituted, ring-fenced, normalized relational DB2 database, which is, by definition, the goal of modernizing a heritage system and specifically its database.

As I/O Servers are the primary topic of this article, let’s address them only and leave the other topics for other forums.

Almost no solution within the computing environment is perfect or ideal; each is a balance between the advantages and disadvantages of the particular method chosen to solve the problem. The required balance may not be achieved on the first pass of implementation and may need tuning once the results become apparent. Additionally, the circumstances of both the organization and the application requirements may change over time, requiring the pros and cons of any solution to be re-balanced.

That being said, it is vital, when designing an I/O Server solution, to ensure that the design and functionality can be changed incrementally, without creating high-risk issues for the entire application. This is where the philosophy of procedures and processes encapsulated into single instances of re-usable code becomes extremely beneficial.

This philosophy supports the creation of an I/O Server for each file (including logicals) in the database. It allows for tuning and adjustment at the individual file level, without the complications of the conditioned responses that would become necessary if many files were serviced by the same I/O Server.
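
As a hedged sketch of what this looks like from the application side, again using the hypothetical CUSTIO server (with its prototypes assumed to be in a copybook), the native CHAIN/UPDATE against the file is replaced by procedure calls:

    **free
    ctl-opt dftactgrp(*no) bnddir('DBSRV');  // illustrative binding
                                             // directory for the servers
    /copy QPROTOSRC,CUSTIO                   // hypothetical prototypes

    // Record layout taken straight from the file definition.
    dcl-ds cust extname('CUSTMAST':*all) qualified end-ds;

    dcl-s custId packed(7:0) inz(1234567);   // sample key value

    if Cust_Read(custId : cust);
      cust.CREDLIM += 1000;                  // some business change
      if not Cust_Update(cust);
        // the server rejected the update; handle the failure here
      endif;
    endif;

    *inlr = *on;

Note that the application never opens CUSTMAST itself, so the file can be locked down as described above without the program losing access.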

When weighing the final compromise between the advantages and disadvantages of adopting a solution, the chosen solution must clearly come down on the side of the greater advantage, even if only slightly.

So, looking at the I/O-Server-per-file proposal, we have a variable negative performance impact (disadvantage) versus a ring-fenced, secure database (advantage). I feel that, even without listing the dozen or so additional advantages of the I/O Server paradigm, the scale still falls on the advantage side.

 
Posted : 17/07/2024 12:42 pm