IBM i e-Book
A Developer’s Guide to Mastering IBM i Concepts
IBM i Index
IBM i History and Overview
System Architecture
Development Tools
Deep Dive into DDS and DDL
Control Language (CL)
Report Program Generator (RPG)
Integrated Language Environment
SQL on IBM i
Jobs & Logs
IBM i History and Overview
IBM i History
1978: Introduction of System/38 Architecture
IBM unveiled the System/38 architecture, representing a significant leap in midrange computing technology. It introduced the concept of a single-level store, allowing seamless integration of programs and data. This innovation simplified data management and improved system efficiency, laying the foundation for future IBM midrange systems. System/38 was designed to be highly reliable and offered a unique approach to data storage and retrieval, setting new standards in the computing industry.
1983: Introduction of IBM System/36
IBM introduced the System/36, a midrange computer system tailored for small and medium-sized businesses. It provided integrated solutions for business applications, combining hardware and software to streamline computing processes. System/36 was designed with ease of use in mind, featuring a user-friendly interface and pre-integrated software packages. It offered flexibility, allowing businesses to adapt to changing computing needs and laid the groundwork for IBM’s future midrange systems.
1980s: The Origin of AS/400
In 1988, IBM revolutionized business computing with the introduction of the AS/400, a versatile and powerful midrange computer system. Its operating system, OS/400, was meticulously designed to integrate seamlessly with the AS/400's hardware, providing businesses with a robust computing solution. With the AS/400, IBM established a foundation for what would become a long-lasting legacy in the world of enterprise computing.
1990s: Rebranding and Advancements
As the 1990s unfolded, IBM faced the challenge of rebranding its AS/400 product line to align with the evolving technological landscape. In 2000, the introduction of the "eServer" initiative led to the rebranding of the AS/400 as the eServer iSeries. Despite the rebranding, the core essence of the system remained intact, emphasizing reliability, scalability, and seamless integration. OS/400, the operating system, continued to evolve, incorporating advancements that catered to the expanding needs of businesses in an increasingly digital world.
Early 2000s: Transition to i5/OS
The early 2000s marked a significant transformation for IBM’s midrange systems. In 2004, IBM rebranded the eServer iSeries as eServer i5, emphasizing the utilization of POWER5 processors, indicative of a leap in processing power. Concurrently, the operating system was renamed from OS/400 to i5/OS. This change was more than cosmetic; it represented a deep integration of the operating system with IBM’s advanced hardware technologies, enabling businesses to handle complex tasks with increased efficiency and speed.
Mid-2000s: Becoming IBM i
By the mid-2000s, IBM recognized the need for a more unified approach to its midrange systems. In 2006, IBM rebranded its product line as System i, emphasizing the system’s integration capabilities and versatility. However, the most profound change occurred in 2008, when IBM merged the System i with the System p platform, forming IBM Power Systems. This amalgamation led to the renaming of the operating system from i5/OS to IBM i, symbolizing a broader, more encompassing vision beyond specific processor technologies. IBM i became a testament to IBM’s commitment to providing businesses with an all-encompassing computing solution that could adapt to various needs seamlessly.
Introducing Version Naming: IBM i 5.4 and 6.1
With the rebranding to IBM i, IBM simplified the versioning system. The complex Version, Release, Modification scheme was replaced with a more straightforward Version.Release format. This change not only streamlined the naming conventions but also reflected IBM's focus on clarity and accessibility. Versions like i5/OS V5R4 and V6R1 became IBM i 5.4 and 6.1, respectively, signifying a more user-friendly approach to understanding the system's progression.
The image below shows the Main Menu of IBM i 7.1, displayed inside a TN5250 client.
Technology Refreshes and Continued Advancements
With IBM i 7.1 and beyond, IBM introduced a novel concept: Technology Refreshes. These updates allowed for the introduction of new features and enhanced hardware support through optional updates, delivered as Program Temporary Fixes (PTFs) for specific releases. This approach provided businesses with the flexibility to tailor their systems according to their unique requirements. IBM's commitment to continuous improvement and adaptability was evident through these Technology Refreshes, ensuring that IBM i remained at the forefront of innovation in the ever-changing landscape of enterprise computing.
PTFs are used to fix bugs, apply updates, or install new features in the operating system and related software. They are essentially patches or fixes provided by IBM to address specific issues or enhance system functionality. To apply PTFs, you typically use the IBM Navigator for i or the command line with commands like ‘APYPTF’. Keep in mind that the process may vary based on your specific IBM i version and configuration.
The 'DSPPTF' command displays the PTFs on the system, while 'DSPSFWRSC' lists the installed software resources.
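As a hedged illustration, loading and applying a single delayed PTF might look like this (the product ID and PTF number here are hypothetical):
Syntax:
LODPTF LICPGM(5770SS1) DEV(*SERVICE) SELECT(SI77775)
APYPTF LICPGM(5770SS1) SELECT(SI77775) DELAYED(*YES)
A delayed PTF of this kind is applied at the next IPL.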
The diagram below shows a roadmap of IBM i history.
Conclusion: A Legacy of Adaptation
The evolution of IBM's midrange systems, from the pioneering days of AS/400 to the sophisticated era of IBM i, represents a legacy of adaptation and innovation. Each rebranding effort and technological advancement was not merely a change in name but a strategic response to the evolving needs of businesses worldwide. The transition from AS/400 to IBM i is a testament to IBM's enduring commitment to providing cutting-edge solutions that seamlessly integrate with advancing technologies. The legacy of IBM i continues to shape the future of enterprise computing, embodying a tradition of excellence and adaptability that defines IBM's unparalleled contribution to the world of technology.
Upcoming Developments and Improvements in IBM i
Over the years, IBM i, formerly known as AS/400, has experienced revolutionary changes. It was once associated mostly with RPG and COBOL programs executing business-related applications, but there has been a striking shift in recent years toward modernization and adaptation.
By supporting modern languages like Python and Node.js, IBM i has embraced openness in the world of programming languages. This growth enables companies to upgrade their apps and access a larger talent pool. The platform has also evolved to be more cloud-friendly, allowing for simple interaction with hybrid cloud settings. This is in line with the industry’s transition to cloud computing, allowing businesses to take advantage of scalability and flexibility.
Security has been strengthened to protect against cyber threats, and the platform can help harness the power of data for better decision-making using artificial intelligence. IBM is actively updating and supporting IBM i to ensure it remains relevant and robust. In essence, it has modernized to fit into today's tech landscape while preserving its core strengths.
Here are some general areas where you might expect upcoming developments and improvements:
Open Hybrid Cloud Platform:
Integration: IBM has been integrating IBM i with cloud services, enabling seamless operation in hybrid cloud environments. This is a crucial step in building an open hybrid cloud platform.
Open-Source Integration: The incorporation of open-source technologies like Python, Node.js, and PHP allows for greater flexibility in developing cloud-native applications and services that can run both on-premises and in the cloud.
Containerization: Exploring containerization technologies like Docker and Kubernetes can help facilitate the deployment and management of applications across hybrid cloud environments, making it easier to move workloads between on-premises and cloud infrastructures.
AI (Artificial Intelligence)
AI and Analytics: IBM has been exploring the integration of AI and analytics capabilities into IBM i. This development could enable businesses to leverage AI for data analysis, predictions, and automation of tasks, enhancing their AI-driven applications.
Performance Improvements: AI workloads often require substantial computational power, so ongoing performance improvements in IBM i are beneficial for running AI workloads efficiently.
Data Management: Enhancements to the Db2 for i database system can support AI initiatives by providing a robust and efficient platform for managing and analyzing data, a fundamental aspect of AI.
Modernization:
In the context of IBM i, modernization refers to efforts aimed at updating and improving the user interface and overall user experience of the IBM i platform.
Graphical User Interface (GUI): In the past, IBM i was renowned for its text-based, still-common green screen interface. However, modernization initiatives call for the development of GUIs that are easier to use and more aesthetically pleasing. These GUIs are made to facilitate user interaction with the system, task completion, and information access.
Web-Based Interfaces: Creating web-based interfaces to access IBM i operations and data is a common component of modernization projects. Web interfaces are accessible from a variety of devices having web browsers, such as desktop computers, tablets, and smartphones, and they are platform independent. More flexibility and accessibility are offered by this method.
Responsive Design: Modern user interfaces frequently use responsive design principles to make sure the interface adjusts to various screen sizes and devices. The user experience is improved across several platforms as a result.
System Architecture
Object-Based System
Anything available in this system is an object, and this provides a common interface for working with system components. It is possible because every component of executable code or data is encapsulated into a secure unit called an object.
This interface allows for standardized commands across different system elements. Every object is referenced through the library in which it resides. A library in IBM i is itself an object, acting as a container for other objects. A set of system libraries is supplied by IBM with the operating system, and any user-created library resides under the system library QSYS.
Single-Level storage
Single-level storage in IBM i means that memory and disk are addressed as one unit of storage, with no distinction between primary storage (RAM) and secondary storage (disk).
The operating system addresses all objects and data within one large pool of virtual storage, known as the system ASP (System Auxiliary Storage Pool).
No additional I/O addressing is required for the processor to reach an object stored on disk, unlike systems that manage dedicated RAM and secondary storage separately.
This is achieved by an address translator: all storage is treated as a single pool of data. This improves the turnaround time for retrieving data from tables, as well as any object such as libraries, programs, modules, binding directories, and so on.
Single-level storage also underpins IBM i's auto-tuning of memory management and disk pooling.
Any addition to disk capacity becomes immediately available within the optimized single-level pool.
There is no need to worry about particular disk drives filling up, or about moving data from one disk to another to improve performance, because all data management is handled by the Licensed Internal Code, which also prevents disk fragmentation.
This saves users the time and planning of allocating disk space; the IBM i storage manager does it automatically. In the USA, almost 80% of businesses reportedly save $60K–$160K annually that would otherwise pay for a part-time or full-time system administration consultant.
It also costs a business less to purchase a disk unit for IBM i than to buy segregated RAM and secondary disk units separately. Note that under some circumstances you might create additional storage pools, called user ASPs and independent ASPs.
Relational Database Integration
IBM i ships with an integrated relational database, Db2 for i. Every time a table insert, update, delete, or alter operation is performed, all the conditions and rules defined in the integrity constraints are evaluated. The data can be inserted, updated, deleted, or altered only if the constraint evaluates to true. In this way, constraints help prevent accidental damage to the database by its users.
Types of integrity constraints:
- Domain Constraint
- Entity Constraint
- Referential Integrity Constraint
- Key Constraint
Domain Constraint
Domain constraints define the set of values that are valid for an attribute. Domain data types include character, integer, date/time, string, and so on. Every value stored must belong to the corresponding domain of its attribute.
Example:
Name | Class | Age |
---|---|---|
Prakash | 6 | 11 |
Ravi | 7 | 12 |
Rajesh | 6 | 11 |
Nikhil | 7B | 13 |
In the table above, the domain of the Class column is integer, but the value '7B' in the last row is character data. This violates the domain constraint, so the insert will not be allowed.
Here, we tried giving the Class column a value in characters; as the screenshots show, the record was not inserted, since the Class column is an integer type.
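The same violation can be sketched in SQL on Db2 for i (the library and table names are hypothetical):
CREATE TABLE PIOLIB.STUDENT (
NAME VARCHAR(30),
CLASS INTEGER,
AGE INTEGER);
-- Fails: the character value '7B' is outside the integer domain of CLASS
INSERT INTO PIOLIB.STUDENT (NAME, CLASS, AGE) VALUES ('Nikhil', '7B', 13);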
Entity Constraint
Entity Integrity Constraint is used to ensure that the primary key cannot be null. A primary key is used to identify individual records in a table, and if the primary key had a null value, we could not identify those records. A relation may contain null values, but not in the primary key.
Example:
| Roll No | Name | Class |
| --- | --- | --- |
| 1 | Nikhil | 9 |
| 2 | Prasanth | 9 |
| 3 | Anil | 9 |
|  | Siddharth | 9 |
In the above table, the Roll No column has a null value in the last row, so it cannot be assigned as the primary key.
Referential Integrity Constraint
Referential Integrity Constraint ensures that there must always exist a valid relationship between two relational database tables. This constraint is defined between two tables. This valid relationship between the two tables confirms that a foreign key exists in a table. It should always reference a corresponding attribute in the other table or be null.
Example:
Table A
Roll No | Name | Class | Subject Code |
---|---|---|---|
6 | Gowtham | 10 | 4243 |
7 | Chandu | 10 | 9876 |
8 | Naveen | 10 | 0123 |
9 | Rajeev | 10 | 8976 |
Table B
Subject Code | Subject |
---|---|
4243 | Maths |
9876 | Physics |
0567 | Chemistry |
8976 | Social |
Here, we can see that in Table A, Subject Code 0123 is not valid: that value is not defined in Table B, where Subject Code is the primary key, while Subject Code in Table A is assigned as the foreign key.
Key Constraint
In Database, a key is used to uniquely identify an entity in an entity set. There could be multiple keys in a single entity set, but out of these multiple keys, only one key will be the primary key. A primary key can only contain unique and not null values in the relational database table.
Example:
Roll No | Name | Class |
---|---|---|
1 | Nikhil | 9 |
2 | Prasanth | 9 |
3 | Anil | 9 |
2 | Siddharth | 9 |
In the above table, Roll No cannot be defined as a primary key because it contains a duplicate value; a primary key column must contain unique values.
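All four constraints can be declared directly in Db2 for i DDL. A minimal sketch (table and column names are hypothetical): NOT NULL and PRIMARY KEY on ROLL_NO enforce the entity and key constraints, while the FOREIGN KEY enforces referential integrity:
CREATE TABLE PIOLIB.SUBJECT (
SUBJECT_CODE CHAR(4) NOT NULL PRIMARY KEY,
SUBJECT VARCHAR(20));
CREATE TABLE PIOLIB.STUDENT_SUBJECT (
ROLL_NO INTEGER NOT NULL,      -- entity constraint: no nulls allowed
NAME VARCHAR(30),
SUBJECT_CODE CHAR(4),
PRIMARY KEY (ROLL_NO),         -- key constraint: values must be unique
FOREIGN KEY (SUBJECT_CODE)     -- referential integrity: value must exist in SUBJECT
REFERENCES PIOLIB.SUBJECT (SUBJECT_CODE));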
Libraries and Library List
Understanding the Role of Libraries in IBM i (formerly AS/400) Systems
In IBM i (formerly AS/400), a library is like a digital folder. It’s where you put similar things together, just like you’d organize your files in folders on your computer. These things can be programs, files, and more. Libraries keep everything neat and help you find what you need. When you want to use something, you say where it is by naming the library. So, libraries are like digital organizers that keep the IBM i system tidy and running smoothly.
In the AS/400 system, libraries, often marked as *LIB, are like virtual folders for organizing objects. Actual AS/400 objects aren’t stored inside libraries; libraries simply group objects together. So, it’s more about association than physical location.
Some AS/400 items, such as databases, storage, and programs, can exist in multiple libraries. When a program wants one of these items, it chooses the correct one from the Library List. Special commands can move items between libraries, and you can link items together when building a program to adjust how the Library List behaves when the program is in action.
Libraries cannot contain other libraries:
- Libraries contain various items. Libraries can’t hold other libraries, except for one special library called QSYS (System provided).
- AS/400 uses a list-like structure, not like Windows, which has a tree-like setup. To find something on AS/400, you need to know the library and the object’s name. Objects on AS/400 are identified by their qualified name, which looks like LIBRARY/OBJECT. For example, if you want to talk about the Employee file in the Company library, you’d say Company/Employee.
- Generally speaking, all libraries created by IBM for use by the operating system begin with the letter ‘Q’.
System Library/ IBM Standard Libraries:
In AS/400, a system library is like a top-secret storage place for important stuff that keeps the whole system running smoothly. This secret library contains essential programs, settings, and data. Here are the key points:
- It’s there by default: System libraries are part of the AS/400 setup, and you can’t easily change them.
- They're locked down: Only trusted people can make changes to these libraries to protect the system.
- Examples: System libraries go by names like QSYS and QSYS2, with QSYS being especially important.
- They’re in control: These libraries are like the command centre, making sure everything in AS/400 works properly.
So, think of system libraries as the secret backbone that keeps AS/400 going strong.
System Libraries:
- QSYS – System library for the AS/400. It contains the programs and other objects that make up the operating system. QSYS must exist on an AS/400 for the system to work. Other libraries on the AS/400 exist within the context of the QSYS library; it is the only library that can contain other libraries. A few special objects, such as user profiles and I/O configurations, can exist only within QSYS. You should never modify or delete any object within the QSYS library.
- QSYS2 – System library for CPIs (Common Programming Interfaces), which includes many SQL services.
- QHLPSYS – Contains on-line help information that is displayed when the Help key or the extended help function keys are pressed.
- QRECOVERY – Contains objects needed during a recovery procedure.
- QUSRSYS – The system user data library, contains various IBM-supplied objects needed for system functions as well as user message queues for holding messages.
- QTCP – TCP Connectivity Utilities.
- QSPL – Holds the spooled printed output pages that have not yet been printed.
- QAFP – Advanced Function Printing.
- QGPL – General Purpose Library that contains IBM-provided objects. The system places newly created objects that are not specifically placed in a distinct library in QGPL.
Note: QGPL is considered a user-defined library.
- QTEMP – Job-specific temporary library (deleted when the job ends). Each time a user signs on, the system creates a QTEMP library for the interactive job. If the user submits a job to a batch queue, another QTEMP library is created for the batch job.
User Libraries:
In IBM i (formerly AS/400), a user library is a personalized storage area for users. It’s like a digital room where they can put their own programs, files, and things. This way, users can keep their work separate from others and have their own special place for their stuff.
In many cases, the system administrator sets up user libraries. These are usually created to store the work of individual users. For instance, each programmer might have their personal user library. The administrator can create as many of these user libraries as needed, and the only restriction is the amount of available disk space on the system’s storage device (DASD).
Current Library (*CURLIB): The library you're currently working in.
The current library in IBM i is the first stop for finding objects a user needs. When a user creates objects and designates *CURLIB, those objects are stored in the current library. It’s a user-specific setting, making it easier to locate and access the user’s work. This setup streamlines the process of finding and using objects, providing a convenient way to organize and retrieve resources.
If the “Limit capabilities” setting in the user profile is set to *YES or *PARTIAL, the user can’t switch their current library.
Library Commands:
CRTLIB: (Create Library)
In IBM i (formerly AS/400), you can make a library by using the ‘CRTLIB’ command.
Type 'CRTLIB' and then the name you want for your library. For instance, if you want to create a library called 'PIOLIB', you'd do this:
Syntax:
CRTLIB PIOLIB
To open the library creation prompt, just type ‘CRTLIB’ on the command line and press F4. Then, provide the library name and a short description.
Library Type: We can choose the 'Library Type', which can be either *PROD or *TEST, as shown in the CRTLIB screenshot. Let's now look at what each option means.
- *PROD: When you make a library with CRTLIB, it's set as a *PROD (production) library by default. For a *PROD library, this attribute decides whether you can change or add data in its database files while you're debugging a program.
If you set the 'update production files' (UPDPROD) parameter to *NO on the Start Debug (STRDBG) command, database files in production libraries cannot be modified during debug mode; they can only be read, not updated.
Syntax:
STRDBG PGM(program-name) UPDPROD(*NO)
- *TEST:In Test libraries, you can make changes to all the objects while testing, even if you’ve set UPDPROD to *NO in the Start Debug (STRDBG) command.
CHGCURLIB: (Change Current Library)
The CHGCURLIB command changes the current library in the library list.
To switch the current library, use the 'CHGCURLIB' (or 'CHGLIBL') command. The sole essential parameter for this command is the name of the library that will become the new current library. Keep in mind that the current library setting only applies to your current session. When you log off and sign in again, the library you set will not persist in your library list. Instead, it will be replaced by the default current library specified in your user profile.
If you want to review or modify your user profile, you can access it by entering “CHGPRF” and pressing F4. However, if you are new to this system, it’s recommended not to make any changes to your user profile unless you are familiar with the implications of doing so.
*CRTDFT: With this special value, no library is the current entry of the library list. If objects are created into the current library, the QGPL library is used as the default.
Syntax:
CHGCURLIB CURLIB(PIOLIB)
CHGLIB: (Change Library)
The CHGLIB command is used to modify the type attribute, text description, default create authority value, and default object auditing value of a library.
The CHGLIB command mandates a single required parameter, LIB, which designates the library to undergo modification. The TYPE, TEXT, and CRTAUT parameters are employed to adjust the respective attributes of the library. As an illustration, to alter the TEXT attribute of PIOLIB, input the command CHGLIB LIB(PIOLIB) TEXT('New library description'). To confirm that the alteration has been implemented, use the Display Library Description command: DSPLIBD LIB(PIOLIB).
Syntax:
CHGLIB LIB(PIOLIB) TEXT('New library description')
CLRLIB: (Clear Library)
When you clear a library with the Clear Library (CLRLIB) command, you delete objects in the library without deleting the library.
Syntax:
CLRLIB LIB(PIOLIB)
CPYLIB:
In IBM i systems (formerly known as AS/400), the ‘CPYLIB’ command serves the purpose of duplicating an entire library, encompassing all of its objects and associated data, to a different library. Here’s how you can utilize the ‘CPYLIB’ command:
Syntax:
CPYLIB FROMLIB(source-library) TOLIB(target-library)
FROMLIB: This parameter designates the source library from which you intend to replicate both objects and data.
TOLIB: This parameter identifies the destination library where you intend to establish a replica of the source library, encompassing all its objects and data.
For instance, if your aim is to duplicate all the contents of a library named ‘PIOLIB’ into a fresh library named ‘NEWLIB,’ you can achieve this by issuing the following command:
Syntax:
CPYLIB FROMLIB(PIOLIB) TOLIB(NEWLIB)
This command will replicate all the objects and data found in ‘PIOLIB’ within ‘NEWLIB,’ essentially producing an identical copy of the source library in the destination library.
Be aware that you must possess the required permissions and authority to execute the ‘CPYLIB’ command. Additionally, it’s important to remember that the command copies all objects and data from the source library to the target library, so exercise caution when using it.
DLTLIB: (Delete Library)
In AS/400, there is a command called "DLTLIB" that serves the purpose of removing or erasing a library. Here is how to use the DLTLIB command:
Enter the DLTLIB command followed by the name of the library you want to delete. For example:
Syntax:
DLTLIB PIOLIB
Note: In this example, replace PIOLIB with the name of the library you want to delete.
If the library contains objects (such as files, programs, etc.), you will be prompted to delete them as well. Confirm the deletion of the objects if needed.
Once confirmed, the library and its objects will be deleted.
Please be cautious when using the DLTLIB command, as it permanently deletes the library and all its contents. Ensure that you have proper authority and backup any important data before using this command. Deleting a library cannot be undone, so make sure you have a backup or no longer need the data within the library.
DSPLIB: (Display Library)
Within the AS400 environment, the DSPLIB command serves the purpose of providing a comprehensive view of a library’s details. To utilize this command effectively, follow these steps:
- Simply input the DSPLIB command, specifying the name of the library you wish to view.
Syntax:
DSPLIB PIOLIB
- Replace PIOLIB with the name of the library you want to display information about.
- The system will provide a detailed list of information about the specified library, including its attributes, object counts, and other relevant details.
- You can scroll through the information using the page-up and page-down keys or by following the on-screen instructions.
The DSPLIB command allows you to review information about a library without making any changes to it. This can be useful for verifying the contents and attributes of a library before performing any operations on it.
DSPLIBD: (Display Library Description)
The DSPLIBD command allows you to view comprehensive information about a library. This information encompasses the library’s category, its associated Auxiliary Storage Pool (ASP) number, the ASP device name linked to the library, the default public authority for objects created within the library, the default auditing settings for objects created in the library, as well as a textual description of the library.
Required Parameter:
- Library: Indicate the name of the library for which information is being presented.
WRKLIB: (Work with Libraries)
The “WRKLIB” command in AS400 (IBM i) is used to display a list of libraries on the system. This command opens a work library list display that shows the names of libraries on the IBM i server, allowing you to browse and manage library-related tasks. You can use this command to view, create, delete, or perform other library-related operations.
EDTLIBL: (Edit Library List)
The “EDTLIBL” command in AS400 (IBM i) is used to edit the library list. The library list determines the order in which libraries are searched for objects in an AS400 environment. By using the “EDTLIBL” command, you can interactively modify the library list, which can be essential for controlling the search path for programs and objects in your system. This command provides a simple interface to add, remove, or reorder libraries in the library list, giving you control over the environment in which your AS400 applications run.
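For non-interactive changes to the library list, the related commands ADDLIBLE and RMVLIBLE add or remove a single entry. A minimal sketch (the library name is hypothetical):
Syntax:
ADDLIBLE LIB(PIOLIB) POSITION(*FIRST)
RMVLIBLE LIB(PIOLIB)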
Source Files and Types
SOURCE PHYSICAL FILES
Source files, or source physical files, are essentially containers (repositories) that hold individual source code members.
Within a source physical file, each source code file is referred to as a ‘member’. The members are individual source code files that contain the actual code written in a specific programming language. It is a structured way to manage or organize the source code for programs, files, and other objects on the system.
COMMANDS USED
CRTSRCPF
CRTSRCPF is an IBM i command that is used to create the source physical file.
File: Specify the name of the source file which you want to create.
Library: Specify the library where the source file will be created.
Record length: Provide the record length of the source physical file, i.e., the length in bytes of the records stored in it.
The record format of the source physical file contains three fields.
- Source sequence number
- Source statement
- Date
The default record length is 92 bytes. The source sequence number contains 6 bytes, Date contains 6 bytes, and the source statement contains 80 bytes.
Similarly, if the user sets the record length to 112, the source statement will contain 100 bytes, with 6 bytes for the sequence number and 6 bytes for the date.
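For example, creating a source physical file with a 112-byte record length might look like this (the library and file names are hypothetical):
Syntax:
CRTSRCPF FILE(PIOLIB/QRPGLESRC) RCDLEN(112) TEXT('RPGLE source members')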
STRPDM
Program development manager (PDM) can be started using the command STRPDM which shows a menu of options for the level on which the user wishes to work.
The user can choose a particular option or directly enter a command for that menu.
To work with libraries, the user can choose option 1 or directly use the WRKLIBPDM command.
To work with objects, the user can choose option 2 or directly use the WRKOBJPDM command.
Similarly, to work with members, the user can choose option 3 or directly use the WRKMBRPDM command.
TYPES OF SOURCE FILES
While working with members, the user can create a new member by pressing the F6 key.
Here users can enter the name of the member/source that they want to create and source type for that member.
Source Types
The source types determine the syntax checker, prompting, and formatting that are used for that member.
Different source types serve different purposes. Different source types are used to organize and define various elements of a program or application.
Many source types can be used in IBM i. To check the entire list of source types supported by IBM i, the user can press the F4 key on the source type parameter while creating a new member.
Below are the most frequently used Source types in the IBM i world.
PHYSICAL FILE (PF)
Source type: PF
Physical files define the structure and attributes of a physical database file.
These files are used to store and organize data records.
Object type: PF type sources are compiled using the CRTPF command and are created with *FILE type object.
LOGICAL FILE (LF)
Source type: LF
Logical files provide a logical view of one or more physical files.
They allow users to define alternate record selection criteria and record sequences, simplifying data access by specifying different key sequences or filtering criteria.
Object type: LF type sources are compiled using CRTLF command and are created with *FILE type object.
RPG PROGRAM (RPG)
Source type: RPG
RPG stands for Report Program Generator.
The RPG source type is used to define the logic and processing instructions for a program. An RPG source member is where you write the business logic, calculations, and data manipulation for your application.
Object type: RPG type sources are compiled using CRTRPGPGM command and are created with *PGM type object.
RPG ILE (RPGLE)
Source type: RPGLE
It is the ILE version of RPG, in which users can write programs and business logic in a more efficient and modular way, improving reusability.
Object type: RPGLE type sources are compiled using CRTBNDRPG command and are created with *PGM type object.
RPGLE type sources can also be compiled using the CRTRPGMOD command and are created with *MODULE type object.
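As an illustration, a typical compile of an RPGLE member into a program might look like this (the library and member names are hypothetical):
Syntax:
CRTBNDRPG PGM(PIOLIB/MYPGM) SRCFILE(PIOLIB/QRPGLESRC) SRCMBR(MYPGM) DFTACTGRP(*NO)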
SQLRPG PROGRAM (SQLRPG)
Source type: SQLRPG
SQLRPG source type is used to define the logic and processing instructions for a program. It is like the RPG program along with the ability to use embedded SQL operations within the same program. This way user can manipulate the data using SQL statements within their program.
Object type: SQLRPG type sources are compiled using CRTSQLRPG command and are created with *PGM type object.
SQLRPG ILE(SQLRPGLE)
Source type: SQLRPGLE
It is the ILE version of SQLRPG, in which users can write programs and business logic, with the ability to embed SQL operations, in a more efficient and modular way that improves reusability.
Object type: SQLRPGLE type sources are compiled using the CRTSQLRPGI command and are created with (OBJTYPE) *PGM or *MODULE type object.
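For example, compiling an SQLRPGLE member directly into a program might look like this (names are hypothetical):
Syntax:
CRTSQLRPGI OBJ(PIOLIB/MYPGM) SRCFILE(PIOLIB/QRPGLESRC) SRCMBR(MYPGM) OBJTYPE(*PGM)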
CONTROL LANGUAGE PROGRAMMING(CLP)
Source Type: CLP
CLP stands for Control language programming.
It allows users to run IBM i commands from a program, and to call programs written in compiled languages (like RPG, COBOL, etc.) for specific calculations, so that tasks can be executed more easily and efficiently.
Object type: CLP type sources are compiled using CRTCLPGM command and are created with *PGM type object.
CONTROL LANGUAGE ILE(CLLE)
Source Type: CLLE
CLLE stands for Control Language (ILE).
It is the ILE version of CL, which allows the user to write programs in a modular way. It can be used, for example, when the user wants to invoke RPG procedures from within a CL program.
Object type: CLLE type sources are compiled using CRTBNDCL command and are created with *PGM type object.
CLLE type sources can also be compiled using CRTCLMOD command and are created with *MODULE type object.
DISPLAY FILE(DSPF)
Source type: DSPF
Display files are used to define the layout and characteristics of interactive screens or user interfaces. They specify how data is presented to users and input is accepted.
Object type: DSPF type sources are compiled using the CRTDSPF command and are created with *FILE type object.
PRINTER FILE(PRTF)
Source type: PRTF
Printer files define the layout and formatting of output generated by your RPGLE programs. They specify how data should be printed on physical printers. It is used to format reports, labels, and other printed output produced by RPG/RPGLE programs.
Object type: PRTF type sources are compiled using the CRTPRTF command and are created with *FILE type object.
TEXT(TXT)
Source Type: TXT
TXT source members are used for including comments and documentation alongside your source code. They are not compiled or processed by an IBM i compiler but serve as readable notes.
Users can also use TXT source members to hold SQL scripts, which can then be executed using the RUNSQLSTM command.
QUERY(QRY)
Source Type: QRY
QRY source type member is used for creating and storing queries using query/400 language on the IBM i system. Query/400 is a language specifically designed for defining queries to extract, filter, and manipulate data from databases. These sources store definitions of queries that can be run interactively or as a part of a batch job process.
BOUND(BND)
Source type: BND
The binder language is used to define the binder source for ILE programs and service programs.
BND members include statements that specify the names a module or service program exports, along with other binding-related information.
A BND source might, for example, bind together different modules of an application, ensuring they work together cohesively when a program is executed.
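A minimal binder source sketch (the exported procedure name is hypothetical):
STRPGMEXP PGMLVL(*CURRENT)
EXPORT SYMBOL('GETCUSTNAME')
ENDPGMEXP
The service program would then be created with a command such as CRTSRVPGM SRVPGM(PIOLIB/CUSTSRV) MODULE(PIOLIB/CUSTMOD) SRCFILE(PIOLIB/QSRVSRC).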
COMMAND DEFINITION (CMD)
Source Type: CMD
CMD source members define custom commands, simplifying complex operations, and improving command line efficiency.
Users can encapsulate specific tasks in a command, which can then be invoked from CL programs or the command line.
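As a sketch, a CMD source member defining a single parameter might look like this (all names are hypothetical, and HELLOPGM is assumed to be an existing command-processing program):
CMD PROMPT('Say hello')
PARM KWD(NAME) TYPE(*CHAR) LEN(10) PROMPT('Name to greet')
It would be compiled with:
CRTCMD CMD(PIOLIB/HELLO) PGM(PIOLIB/HELLOPGM) SRCFILE(PIOLIB/QCMDSRC) SRCMBR(HELLO)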
C PROGRAMMING(C)
Source Type: C
C type source members allow users to write, and then run, programs written in the C programming language on IBM i.
Object type: C type sources are compiled using CRTBNDC command and are created with *PGM type object.
C type sources can also be compiled using CRTCMOD command and are created with *MODULE type object.
CPP PROGRAMMING(CPP)
Source Type: CPP
CPP type source members allow users to write, and then run, programs written in the C++ programming language on IBM i.
Object type: CPP type sources are compiled using CRTBNDCPP command and are created with *PGM type object.
CPP type sources can also be compiled using CRTCPPMOD command and are created with *MODULE type object.
COBOL LANGUAGE(CBL)
Source Type: CBL
COBOL stands for common business-oriented language.
CBL source type members contain code written in COBOL, a high-level programming language designed for business applications.
COBOL is known for its readability and is often used in legacy systems for financial and administrative applications.
Object type: CBL type sources are compiled using CRTCBLPGM command and are created with *PGM type object.
COBOL LANGUAGE BOUND(CBLLE)
Source Type: CBLLE
CBLLE source type members also contain code written in the COBOL programming language, but they undergo a two-step process: first, the source code is compiled into an intermediate form called an 'object module' using the COBOL compiler; then the module is bound into an executable program.
Object type: CBLLE type sources are compiled using CRTBNDCBL command and are created with *PGM type object.
CBLLE type sources can also be compiled using CRTCBLMOD command and are created with *MODULE type object.
Integrated File System
Integrated File System (IFS) in AS/400 (IBM i) is a feature that allows you to work with different types of data, including text files, documents, and directories, in a similar way to traditional file systems on other operating systems. It acts as a bridge between the traditional database-centric world of IBM i and the more common file-based systems.
Why do we use IFS?
The IFS provides a common way to manage and access various types of files and data on your AS/400 system, making it easier to work with different types of information alongside your traditional database files. It’s like having a versatile file storage system within your AS/400 environment, making it more flexible and compatible with various file formats.
Structure of IFS on IBM i
The structure of the IFS on IBM i consists of several key elements, including:
1. Libraries: These are collections of objects and files, typically accessed using the QSYS.LIB notation.
2. Files: Files within libraries store various types of data and can be accessed through the IFS.
3. Directories: Directories provide a way to organize and manage files and data within the IFS.
4. Stream Files and Objects: Stream files are a common type of data stored within directories and can be accessed via specified paths. Objects can also be accessed similarly.
5. Folders: These are used to organize and group documents and data in QDLS (Document Library System).
6. Documents: Documents within folders are part of the QDLS structure and are accessible to users.
The IFS allows for the structured organization of data, with libraries holding files and objects that can be accessed through specific paths and directories. Additionally, folders and documents in QDLS provide a further level of organization and accessibility for users.
Stream files
Stream files in AS/400’s Integrated File System (IFS) are a type of data storage used for unstructured content, such as text, documents, images, and more. They provide a flexible way to store and manage various file types alongside traditional database-centric data. Here are some key points about stream files in AS/400’s IFS:
1. Data Type: Stream files can store a wide range of data types, including plain text, binary data, documents (e.g., PDF, DOCX), images, and multimedia files.
2. Flexible Structure: Unlike traditional database files in AS/400, stream files have a flexible structure, making them suitable for various file formats and content.
3. Hierarchical Storage: Stream files are organized in a hierarchical directory structure similar to directories and subdirectories in a file system.
4. Access: Stream files can be accessed and managed through standard file operations and file protocols. This includes reading, writing, copying, moving, and deleting files.
5. Integration: The IFS allows you to seamlessly integrate stream files with traditional database files, making it easy to work with structured and unstructured data in the same environment.
6. File Formats: Stream files can be used to store files in various formats, making them suitable for a wide range of applications, including document management, web content, and more.
7. Security: AS/400 provides security features to control access to stream files, ensuring data confidentiality and integrity.
Stream files in AS/400’s IFS offer a versatile way to handle unstructured data, enabling businesses to manage a variety of file types within the same system, enhancing flexibility and compatibility.
File System | Description | Example |
---|---|---|
Root File System (/) | The primary file system; uses the forward slash (/) as the separator for directories and files. It is the top-level directory in the IFS. | `/mydirectory/myfile.txt` |
QSYS.LIB File System | A library-based file system used to access objects and files associated with libraries on AS/400. It follows the QSYS.LIB structure. | `/QSYS.LIB/MYLIB.LIB/MYFILE.FILE` |
QDLS File System | The Document Library System (QDLS) file system for organizing documents and folders. It provides a hierarchical structure for managing documents. | `/QDLS/MYFOLDER/MYDOCUMENT.PDF` |
QOPT File System | The optical file system, which provides access to optical media such as CD and DVD volumes. | `/QOPT/VOLID/MYFILE.TXT` |
These file systems offer different naming conventions and structures, allowing users to work with files and directories in a way that best suits their requirements. Each file system has its own set of rules and purposes, making it more convenient for managing and accessing data within the IFS on AS/400.
An IFS “Hello World” Application
Below are simple RPGLE programs, in free and fixed format, that write 'Hello World' into a stream file stored in the IFS. Both programs use the C APIs open, write, and close to create and write a stream file in the IFS of the IBM i operating system.
First, let's see the free-format RPGLE program.
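A minimal free-format sketch of such a program (the path, names, and flag values are illustrative, and error and CCSID handling are omitted for brevity):
**free
ctl-opt dftactgrp(*no) actgrp(*new);
// Prototypes for the IFS C APIs
dcl-pr open int(10) extproc('open');
  path pointer value options(*string);   // stream file path
  oflag int(10) value;                   // open flags
  mode uns(10) value options(*nopass);   // permission bits
end-pr;
dcl-pr write int(10) extproc('write');
  fd int(10) value;                      // file descriptor
  buf pointer value;                     // data to write
  nbytes uns(10) value;                  // number of bytes
end-pr;
dcl-pr close int(10) extproc('close');
  fd int(10) value;
end-pr;
dcl-c O_WRONLY 2;    // open for writing only
dcl-c O_CREAT 8;     // create the file if it does not exist
dcl-c O_TRUNC 64;    // clear any existing contents
dcl-s fd int(10);
dcl-s data char(11) inz('Hello World');
// Create /home/piofile.txt with mode 438 (octal 666: read/write for all)
fd = open('/home/piofile.txt': O_WRONLY + O_CREAT + O_TRUNC: 438);
if fd >= 0;
  write(fd: %addr(data): %size(data));
  close(fd);
endif;
*inlr = *on;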
Now, let's see the equivalent RPGLE program in fixed format.
Output: screenshots show the stream file before and after the program run.
Common Commands for IFS in AS/400
In AS/400 (IBM i), several commands are commonly used for tasks related to working with files and data in the IFS. Here's a brief explanation of the most common ones:
1. WRKLNK (Work with Links):
Syntax:
WRKLNK '/DIRECTORY_NAME/FILE_NAME'
– This command allows you to work with objects and symbolic links in the Integrated File System (IFS) on the IBM i platform. Symbolic links are references to other files or directories, providing a convenient way to access or reference files in different locations.
2. DSPF (Display Stream File):
Syntax:
DSPF '/DIRECTORY_NAME/FILE_NAME'
– DSPF displays the contents of a stream file in the IFS (it can also display database file members). It should not be confused with the DSPF source type, which defines display files for interactive screens.
3. EDTF (Edit File):
Syntax:
EDTF '/DIRECTORY_NAME/FILE_NAME'
– EDTF is an IBM i command used to edit stream files in the IFS (it can also edit database file members). It opens the file in a simple full-screen editor, allowing developers to make changes directly on the system.
4. CPYFRMIMPF (Copy from Import File):
Syntax:
CPYFRMIMPF FROMSTMF('/DIRECTORY_NAME/FILE_NAME') TOFILE(LIBRARY_NAME/FILE_NAME) MBROPT(*REPLACE) RCDDLM(*CRLF)
– This command is used to copy data from an import file (an external, delimited data file) into a database file on the IBM i system. It is often used for importing data from various sources, such as CSV files, into the system's database files.
5. CPYTOIMPF (Copy to Import File):
Syntax:
CPYTOIMPF FROMFILE(LIBRARY_NAME/FILE_NAME) TOFILE('/DIRECTORY_NAME/FILE_NAME') MBROPT(*REPLACE) RCDDLM(*CRLF) STRDLM(*NONE)
– CPYTOIMPF is the counterpart to CPYFRMIMPF. It is used to copy data from a database file on the IBM i system into an import file (an external data file). This command is useful for exporting data from the system to other platforms or applications.
These commands are part of the standard set of commands available on the AS/400 (IBM i) platform and are used for managing and manipulating files, objects, and data.
Access IFS using DB2
You can access the IFS (Integrated File System) on an AS/400 system by using Db2 for i SQL services. Here are queries to read and write records in an IFS file.
Read Record from IFS
Run the below query:
SELECT CAST(LINE AS CHAR(50)) FROM
TABLE(QSYS2.IFS_READ('/home/piofile.csv'))
Write a record to the IFS
Run the below query:
CALL QSYS2.IFS_WRITE('/home/piofile.csv',
'insert record in IFS file using SQL',
OVERWRITE => 'APPEND',
END_OF_LINE => 'CRLF')
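You can also list IFS objects with SQL, using the QSYS2.IFS_OBJECT_STATISTICS table function available on recent IBM i releases (the path here is illustrative):
SELECT PATH_NAME, OBJECT_TYPE, DATA_SIZE
FROM TABLE(QSYS2.IFS_OBJECT_STATISTICS(
START_PATH_NAME => '/home',
SUBTREE_DIRECTORIES => 'NO'))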
Development Tools
SEU (Source Entry Utility)
SEU, or Source Entry Utility, is a text-based editor and one of the essential components of the Application Development Tool Set (ADTS) in IBM i (formerly AS/400 or iSeries). It is used for creating, editing, and maintaining source code files in various programming languages, such as RPG (Report Program Generator), CL (Control Language), and COBOL, on the IBM i platform. Here is a detailed explanation of SEU within ADTS on IBM i:
User Interface:
SEU provides a character-based, green-screen interface for editing source code. It is a menu-driven tool, and developers interact with it through text-based commands and keyboard shortcuts.
Main Functions:
SEU primarily serves as a source code editor and offers several critical functions:
Creating Source Code: You can create new source code files from scratch using SEU by pressing the F6 key in PDM under a source physical file.
Editing Source Code: Developers use SEU to open existing source code files for editing. It provides syntax highlighting and indentation to improve code readability.
Navigation: SEU allows easy navigation through the source code, including functions like finding text, moving between code sections, and jumping to specific line numbers.
Compilation Support: SEU integrates with the IBM i development environment, allowing you to compile and run programs directly from the editor.
Copy and Paste: Standard copy-and-paste functionality is supported, which is useful for code reuse and modification.
Print and Save: SEU enables you to print or save source code files, making it easier to document or share code.
Integration with Other Tools: It can be used in conjunction with other ADTS tools like SDA (Screen Design Aid) and PDM (Programmer’s Development Manager) to develop complete applications.
Programming Language Support:
SEU is versatile and supports multiple programming languages commonly used on IBM i, such as RPG, CL, COBOL, and more. It provides syntax highlighting and context-specific features for each language.
Customization:
You can customize SEU to match your preferences and coding standards. This includes defining keyboard shortcuts, configuring display options, and setting indentation rules.
Multiple Modes:
SEU operates in different modes depending on the type of source code being edited. For example, it has different modes for RPG, CL, and other languages, adapting its features and behaviour accordingly.
Version Control:
While SEU itself does not provide version control, it can be used in conjunction with external version control systems or practices to manage source code versions and changes.
Security and Access Control:
Access to SEU and the ability to modify source code files can be controlled through security settings, ensuring that only authorized users can make changes to code.
Documentation and Comments:
SEU allows developers to add comments and documentation within the source code, helping to explain the purpose and functionality of code segments.
Search and Replace:
SEU offers powerful search and replace capabilities, allowing developers to efficiently locate and modify code elements throughout the source code.
In summary, SEU is a crucial tool within the ADTS on IBM i for creating, editing, and maintaining source code files in various programming languages. Its text-based interface and integrated features make it a widely used and versatile tool for software development on the IBM i platform.
Program Development Manager
Managing PDM levels and Commands
PDM (Program Development Manager) can be started through the versatile 'STRPDM' command, which offers a comprehensive menu for users to select their preferred level of operation, whether for library management, object manipulation, or member handling. Alternatively, users can employ specific commands tailored to their intended focus.
- WRKLIBPDM allows users to manage libraries efficiently within the PDM environment.
- WRKOBJPDM provides tools for effective object management in the PDM context.
- WRKMBRPDM offers specialized capabilities for handling file members seamlessly using PDM.
Work with Libraries
The Work with Libraries Using PDM (WRKLIBPDM) command allows you to work with a single library or multiple libraries. Using this command, you can bypass the Programming Development Manager (PDM) menu and the Specify Libraries to Work With display.
Steps:
- Type WRKLIBPDM on the command line and press F4.
- A prompt opens with multiple library selection options.
- *PRV: Continue working with the same library or libraries used in the previous WRKLIBPDM session.
- *LIBL: Operate on all libraries listed in the job’s library list.
- *USRLIBL: Focus on libraries within the user portion of the job’s library list.
- *ALL: Include all libraries, encompassing system (QSYS and QTEMP) and user libraries.
- *ALLUSR: Engage with all non-system libraries, encompassing user-created ones.
- *CURLIB: Concentrate on the current library for the job; if unspecified, defaults to QGPL.
- Select the library and press Enter.
Work with Objects
The Work with Objects Using PDM (WRKOBJPDM) command allows you to work with a list of objects in one library. Using this command, you can bypass the Programming Development Manager (PDM) menu and the Specify Objects to Work With display.
Steps:
- Type WRKOBJPDM and press F4.
- A prompt will open with four options.
- Library: Specifies the library that contains the objects you want to work with.
- Object: Specifies the object or objects you want to work with.
- Object Type: Specifies the object type for the objects you want to work with.
- Object attribute: Specifies the object attribute for the objects you want to work with.
Note: The *PRV setting in the “Library” field defaults to the user’s previously accessed library but can be overridden with a specific library name. The “Object” and “Object type” options further refine the displayed objects.
Below is the resulting screen after executing the WRKOBJPDM command with the library set to QGPL and *ALL selected for the other options.
By utilizing the 'WRKOBJPDM' command, you can precisely identify the source physical files within any designated library. In our scenario, we run 'WRKOBJPDM' against the library TSTTXK to see its source physical files displayed on screen.
Work with Members
The Work with Members Using PDM (WRKMBRPDM) command allows you to work with a list of members in one database file. The WRKMBRPDM command is used to see all the source members of a source physical file.
Steps:
- Type WRKMBRPDM and press F4.
- A prompt will open with 3 options.
- File: Specifies the database file that contains the members you want to work with. The file can be a source physical file or a data physical file.
- Member: Specifies the member or members you want to work with.
- Member type: Specifies the member type for members you want to work with.
- The screen resulting from the command WRKMBRPDM, with the file library specified as TSTTXK and *ALL for the other options, is shown below.
STRPDM
To start PDM with STRPDM, you can follow these steps:
- Type STRPDM on any command line and press Enter.
- The PDM menu will be displayed, where you can choose the level at which you want to work: libraries, objects, or members.
- Select the option that corresponds to your desired level and press Enter. In our case, we are taking 3 – work with members.
- You will see a screen where you can specify the library, object, or member name and type that you want to work with. You can also use a wildcard (*) to match multiple names or types.
PDM Options
PDM offers a multitude of options for versatile library, object, and member management. Here are some common tasks you can perform:
- Option 2: Edit members using the Source Entry Utility (SEU).
- Option 4: Delete objects or members.
- Option 5: Display objects or members.
- Option 7: Rename objects or members.
- Option 8: View attributes of objects or members.
- Option 9: Work with user-defined options.
- Option 14: Compile members using default commands.
- Option 15: Copy objects or members.
- Option 16: Promote objects or members to other libraries.
- Option 18: Print objects or members.
You can also create your custom options to execute any desired command, such as adding a library to your library list or managing spooled files. These user-defined options can be stored in an option file, with the default being QAUOOPT in library QGPL. However, you have the flexibility to create and configure your own option file as needed.
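For instance, a user-defined option AL (the option code is illustrative) could run the command ADDLIBLE LIB(&L), where &L is the PDM substitution variable that carries the library of the selected entry.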
Additional Commands and Options in PDM for AS400
In addition to the commonly used PDM (Program Development Manager) options and commands mentioned earlier, here are some other useful commands and options available in PDM on AS/400 (IBM i) that can assist developers and programmers in various tasks:
- Display Message (DSPMSG): View system and job-related messages to monitor system activity and diagnose issues.
- Work with Data Areas (WRKDTAARA): Manage data areas, which are objects used for storing and retrieving data in a specific format.
- Work with Data Queues (WRKDTAQ): The command WRKDTAQ is used to display the list of available data queues from one or more libraries.
- Work with Spool Files (WRKSPLF): Display and manage spool files generated by batch jobs and reports.
- Work with Service Programs (WRKSRVPGM): Manage service programs that contain reusable routines and procedures.
- Work with Device Files (WRKDEVD): Manage device files and configurations for printers and devices.
- Work with Job Logs (WRKJOBLOG): Access and review job logs for job-related messages and diagnostic information.
- Work with Output Queues (WRKOUTQ): Manage output queues and their spooled printer files, including starting, stopping, and managing print jobs.
- Work with Message Queues (WRKMSGQ): Handle message queues, view, and interact with messages in message queues.
- Work with Messages (WRKMSG): Display and work with the messages on a message queue.
- Work with Service Entry Points (WRKSRVENT): List and manage service entry points used in service programs.
- Work with Data Files (WRKDBF): Interact with data files, including viewing and managing records within them.
- Work with Jobs (WRKJOB): Display information about active jobs and manage job-related tasks.
- Work with Job Queues (WRKJOBQ): Manage job queues and prioritize job processing.
- Work with Configuration Status (WRKCFGSTS): Review and manage the status of communication and device resources.
These commands, along with the PDM options mentioned earlier, provide a comprehensive set of tools for developers and administrators working on the AS/400 platform. They cover various aspects of system management, job control, data manipulation, and application development.
SDA (Screen Design Aid)
User Interface:
SDA is accessed through the IBM i green-screen interface, providing a menu-driven and text-based environment for designing screens. It offers a straightforward and interactive interface for developers.
Main Functions:
SDA is primarily used for designing and defining the layout of interactive display screens. These screens can be used for a variety of purposes, such as data entry, inquiry, reporting and more.
Key functions of SDA include:
Screen Design: Developers can create screens by defining fields, text, and other screen elements. SDA allows for specifying field attributes like size, position, data validation, and help text.
Screen Navigation:
You can define the flow of screens and how users navigate between them. This includes defining function keys for common actions (e.g. Save, Cancel, Next Page).
Field Validation: SDA supports defining validation rules for data entered by users, ensuring data accuracy and integrity.
Display File Compilation: Once screens are designed, SDA generates DDS source for the display file and can compile it into a display file object that RPG (Report Program Generator) programs use to interact with users.
Record-Level Access: Developers can specify how data is retrieved and updated from the underlying database files or tables when users interact with screens.
Integration with Programming Languages:
Screens designed in SDA are typically used in RPG programs, but they can also be utilized in other languages like COBOL. SDA generates source code for the display files, making it easy to incorporate screens into application logic.
Customization:
SDA allows developers to create custom display formats that match the specific needs of their applications. You can define screen templates and layouts that are consistent with your organization’s design standards.
Graphics and Multimedia:
While SDA primarily deals with text-based screens, it does support limited graphics and multimedia elements, such as simple graphics and image placement.
Security and Access Control:
Access to SDA and the ability to modify display files can be controlled through security settings, ensuring that only authorized users can make changes to screen designs.
Documentation:
SDA provides options for adding comments and documentation within the screen design, helping developers understand the purpose and functionality of each screen.
Testing and Simulation:
SDA includes testing and simulation features, allowing developers to preview how screens will appear and behave before they are integrated into applications.
In summary, SDA is a valuable tool within the ADTS on IBM i for designing interactive display screens used in a wide range of applications. It simplifies the process of screen design, navigation, and integration with programming languages, making it an essential component for creating user-friendly and efficient IBM i applications.
Deep Dive into DDS and DDL
Externally Described Files
An externally described file is one whose record and field descriptions are defined to the operating system (for example, through DDS) rather than through field descriptions on input and/or output specifications within the RPG source member.
Externally described files offer the following advantages:
- Code efficiency in RPG/400 programs. If the same file is used by many programs, the fields can be defined once to the OS/400 system and used by all the programs. This practice eliminates the need to code input and output specifications for RPG/400 programs that use externally described files.
- Requires less maintenance when the file’s record format is changed. You can often update programs by changing the file’s record format and then recompiling the programs that use the files without changing any coding in the program.
- Improved documentation, because programs using the same files use consistent record-format and field names.
- Improved reliability. If level checking is specified, the RPG program will notify the user if there are changes in the external description.
Defining Externally Described Files
When defining an externally described file in fixed format, you specify E in the File Format column of the F-spec, whereas in free format you can use DISK(*EXT) on the DCL-F statement.
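As a quick, hedged illustration (the file name CUSTFILE is an assumption, and fixed-format F-specs are column-sensitive), the two styles look like this:

Fixed format (E in the File Format column):
FCUSTFILE  IF   E             DISK

Free format:
**free
dcl-f CUSTFILE disk(*ext) usage(*input);   // externally described disk file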
The information in this external description includes:
- File information like file type, attributes, access method (by key or relative record number)
- Record-format description, which includes the record format name and field descriptions (names, locations, and attributes).
Types of files that can be described externally in an RPGLE program
- Physical files (PF): A physical file is a data file in IBM i that contains the actual data stored on the system along with its description.
- Logical files (LF): A logical file contains a description of the records of a physical file. Logical files do not contain actual data; an LF is a view or representation of one or more PFs and cannot exist without a PF.
- Display files (DSPF): Display files are used in IBM i to define the layout of screens for interactive programs. They allow developers to design user interfaces, specifying screen formats, input fields, and output areas.
- Printer files (PRTF): A printer file is used to define the format of output intended for printing or for showing reports to users. PRTFs specify the arrangement of text and data on printed documents and ensure proper format, presentation, and alignment.
Flat Files
A flat file is a special type of physical file that has no hierarchical structure and no multiple record formats. It consists of a single long field whose length (RCDLEN) is defined at creation time; the maximum RCDLEN is 32766 bytes.
Flat files have no field definitions, and no indexes can be built over them. In a flat file, the file name, record format name, and field name are the same.
We can write, read, update, and delete records in a flat file. Reading and deleting can be done normally, whereas update and write operations require the use of data structures.
Usage
- Flat files are mostly used as output files for copying data to a stream file on the IFS.
- Flat files are used to store data for pre-runtime arrays.
Creating a flat file.
- Creating a flat file is the same as creating a physical file.
- It can be created easily either from the command line or in DDS in the STRSEU editor.
CRTPF FILE(Library/FlatFileName) RCDLEN(50)
Figure 1 : Creating A Flat file
Figure 2 : Creating A Flat file
When we run DSPFFD (Display File Field Description) on a flat file, we can see that the file name, record format name, and field name are the same. The record length provided at the time of creating the flat file becomes its field length.
Figure 3 : DSPFFD On Flat files
Operations on a flat file:
Reading a Flat File Using RPGLE PGM
Figure 4 : Reading A Flat file
- Line 2 & 3: Included the flat file “FLATFILE” in the program by declaring it with usage as input, adding PREFIX W_ (so the field name becomes W_FLATFILE) and renaming the record format from FLATFILE to FFile with RENAME. (Note: we need RENAME on the record format to avoid compile-time severity 40 error *RNF2109, and we need PREFIX on the field, otherwise we get compile-time severity 30 error *RNF7503.)
- Line 4 and 5: These are comments.
- Line 6: We defined a variable FFvar with the same length as the flat file RCDLEN.
- Line 9: We set the file pointer to the starting RRN of the flat file.
- Line 10: Read the first record of the flat file.
- Line 11 to 15: A do-while loop runs until end of file is reached.
- Line 12: The flat file field is assigned to the variable FFvar.
- Line 13: Display the data in FFvar.
- Line 14: Read the next record from the flat file.
- Line 17: Set the last record indicator to *ON.
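Figure 4 is a screenshot, so here is a hedged free-format RPGLE sketch of the same read logic (the line numbers above refer to the figure, not this sketch; a 50-byte RCDLEN is assumed):

**free
dcl-f FLATFILE disk(*ext) usage(*input)
      rename(FLATFILE:FFile) prefix(W_);   // avoids *RNF2109 / *RNF7503
dcl-s FFvar char(50);                      // same length as the RCDLEN

setll 1 FFile;               // position the pointer to the starting RRN
read FFile;                  // read the first record
dow not %eof(FLATFILE);
   FFvar = W_FLATFILE;       // field name = file name, prefixed with W_
   dsply FFvar;              // display the record data
   read FFile;               // read the next record
enddo;
*inlr = *on;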
Writing data in a Flat File Using RPGLE PGM
Figure 5: Writing data into a flat file
- Line 2 & 3: Included the flat file “FLATFILE” in the program by declaring it with usage as input, output, update, and delete, with PREFIX W_ (field name W_FLATFILE) and the record format renamed from FLATFILE to FFile with RENAME.
- Line 6: Declare a variable FFvar with the same length as RCDLEN.
- Line 12: Initialize FFvar with the data.
- Line 13: Assign the value of FFvar to the flat file field W_FLATFILE.
- Line 14: Write the data to record format FFile using the WRITE opcode.
- Line 17: Set the last record indicator to *ON.
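A hedged free-format sketch of the write logic in Figure 5 (file, format, and field names as in the walkthrough; the data value is illustrative):

**free
dcl-f FLATFILE disk(*ext) usage(*input:*output:*update:*delete)
      rename(FLATFILE:FFile) prefix(W_);
dcl-s FFvar char(50) inz('Sample flat file record');

W_FLATFILE = FFvar;    // move the data into the record field
write FFile;           // add a new record to the flat file
*inlr = *on;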
Chain and Update on Flat File Using RPGLE PGM
Figure 6 : Chain and Update on a Flat file using RPGLE Program
Figure 7 : Data in Flat file before chain and Update
Figure 8 :Data in Flat file after chain and Update
- Line 2 & 3: Included the flat file “FLATFILE” in the program by declaring it with usage as input, output, update, and delete, with PREFIX W_ (field name W_FLATFILE) and the record format renamed from FLATFILE to FFile with RENAME.
- Line 5: Declare a variable UpdVar with the same length as RCDLEN.
- Line 8 : Initialize the UpdVar with the data.
- Line 9: Set the data pointer to 1 RRN with Chain operation on flat file.
- Line 10 – 13: This IF block is executed when the CHAIN operation has found data in the flat file.
- Line 11: Assign the value in UpdVar to the W_FLATFILE field of the flat file.
- Line 12: Update the flat file using UPDATE opcode and record format Ffile.
- Refer Figure 7 and Figure 8 for data in Flat file before and after Chain-Update operation.
- Line 15: Set the last record indicator to *ON.
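A hedged sketch of the chain-and-update logic in Figure 6 (RRN 1 and the update value are assumptions):

**free
dcl-f FLATFILE disk(*ext) usage(*input:*output:*update:*delete)
      rename(FLATFILE:FFile) prefix(W_);
dcl-s UpdVar char(50) inz('Updated record data');

chain 1 FFile;               // position to relative record number 1
if %found(FLATFILE);
   W_FLATFILE = UpdVar;      // replace the record contents
   update FFile;             // rewrite the record just read
endif;
*inlr = *on;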
Chain and Delete on Flat File Using RPGLE PGM
Figure 9: Chain-Delete on a flat file using RPGLE program
- Line 2 & 3: Included the flat file “FLATFILE” in the program by declaring it with usage as input, output, update, and delete, with PREFIX W_ (field name W_FLATFILE) and the record format renamed from FLATFILE to FFile with RENAME.
- Line 5: Declare a variable DelVar with the same length as RCDLEN.
- Line 8 : Initialize the DelVar with the data.
- Line 9: Set the data pointer to 1 RRN with Chain operation on flat file.
- Line 10 – 13: This IF block is executed when the CHAIN operation has found data in the flat file.
- Line 11: Assign the value in DelVar to the W_FLATFILE field of the flat file.
- Line 12: Delete the flat file record using DELETE opcode and record format Ffile.
- Refer Figure 10 and Figure 11 for data in Flat file before and after Chain-Delete operation.
- Line 15: Set the last record indicator to *ON.
Figure 10: Data in flat file before chain delete
Figure 11: Data in flat file after chain-delete
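The chain-and-delete logic in Figure 9 follows the same pattern; a minimal hedged sketch:

**free
dcl-f FLATFILE disk(*ext) usage(*input:*output:*update:*delete)
      rename(FLATFILE:FFile) prefix(W_);

chain 1 FFile;        // position to relative record number 1
if %found(FLATFILE);
   delete FFile;      // remove the record just retrieved
endif;
*inlr = *on;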
Physical and Logical Files
Introduction
- In AS400, a physical file is a fundamental database object used for storing structured data.
- It serves as a container for records, like a table in a relational database.
- Physical files are a key component that is widely used in IBM’s AS400 platform for data storage and retrieval.
Usage
- Physical files are used to store several types of data, including customer information, inventory records, financial data and more.
- They play a significant role in application development on the AS400 platform, enabling programs to read, write, update, and delete records efficiently.
- Physical files are also crucial for generating reports and performing data analysis.
Common Commands for Physical File in AS400
- Create Physical File (CRTPF): This command is used to create a new physical file.
- Reorganize Physical File Member (RGZPFM): The RGZPFM command is used to reorganize a physical file, optimizing its storage and performance.
- Change Physical File (CHGPF): The CHGPF command is used to change the attributes of a physical file. It allows us to modify various properties of a PF, such as the record format, record length, field definitions, and so on.
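For illustration, hedged examples of these commands (the library MYLIB and file TESTPF are assumptions):

CRTPF  FILE(MYLIB/TESTPF) SRCFILE(MYLIB/QDDSSRC) SRCMBR(TESTPF)
RGZPFM FILE(MYLIB/TESTPF)
CHGPF  FILE(MYLIB/TESTPF) MAXMBRS(2)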
Restrictions and Compatibility
- Record Length: Each physical file has a maximum record length, which may vary based on the AS400 model or OS version.
- Compatibility: AS400 physical files are primarily designed for use within the AS400 environment. While there are methods to access AS400 data from other platforms, it may require additional integration work.
Examples:
To create a PF, press F6 and fill in the following details.
Press Enter.
Press F4 on the first line.
For the record format, type ‘R’ in the Name Type column.
For a field name, the Name Type column is left blank.
For a key field, type ‘K’ in the Name Type column.
After this, provide the lengths and data types.
In the Functions column we use keywords such as COLHDG for a column heading and TEXT for a text description.
Now the PF looks like this.
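The screenshot is not reproduced here; a hedged DDS sketch of what TESTPF might contain (field names and lengths are assumptions, and DDS entries are column-sensitive):

A          R TESTPFR                   TEXT('Test physical file')
A            ID             5P 0       COLHDG('ID')
A            NAME          25A         COLHDG('Name')
A            CITY          20A         COLHDG('City')
A          K ID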
Now, to compile this PF, we can either use option 14 or the ‘CRTPF’ command, as shown below.
Press Enter and PF will be compiled in the library.
UPDDTA
To add a record to the physical file, we can use the command below:
UPDDTA PIOLIB/TESTPF
Or we can use SQL statements as well.
Type STRSQL, press Enter, and run the query below:
INSERT INTO PIOLIB/TESTPF VALUES(1, 'name', 'city')
Type of Entries
- Record-level entries: For a PF, the record format name is specified along with an optional text description. The record-level entries can be FORMAT and TEXT.
FORMAT
This record-level keyword specifies that the record format being defined is to share the field specifications of a previously defined record format. The name of the record format being defined must be the name of the previously defined record format.
The format of this keyword is:
FORMAT (LIB-NAME / FILE-NAME)
TEXT
This record level keyword is used to supply a text description of the record format and it is used for documentation purposes only.
The format of this keyword is:
TEXT (‘description’)
- File-level entries (optional):
UNIQUE: A record cannot be entered or copied into a file if its key value is the same as the key value of a record already existing in the file.
FIFO: Duplicate key records are retrieved in first-in, first-out order.
LIFO: Duplicate key records are retrieved in last-in, first-out order.
FCFO: Duplicate key records are retrieved in first-changed, first-out order.
REF: This keyword is used to specify the name of the file from which the fields take their definitions.
FIFO (First In, First Out), LIFO, and FCFO share the same syntax.
Syntax and how to use the REF keyword:
Now we can refer to the file ‘TESTPF’ to take the field definitions into the new file.
In the Ref column, enter ‘R’, and in the Functions column use REFFLD (Referenced Field) with the field name and the name of the file being referred to.
- Field-level entries: The field names and field lengths are defined here, along with an optional text description. For that we use keywords such as ALIAS, COLHDG, DATFMT, EDTCDE, EDTWRD, and REFFLD.
- Key Field Level Entries: The field names used as key fields are specified.
CHGPF
If we do not want to lose our data but want to compile source member, we can do it by using CHGPF command.
It is mostly used when we change the attribute of a field.
CHGPF FILE(PIOLIB/TESTPF) SRCFILE(PIOLIB/QDDSSRC) SRCMBR(TESTPF) MAXMBRS(2)
Here we have raised the maximum number of members to 2 using the CHGPF command, so we can now add one more member to the same PF, ‘TESTPF’.
ADDPFM
By using ADDPFM command we can add members to the physical file.
Type ADDPFM and press F4.
ADDPFM FILE(PIOLIB/TESTPF) MBR(MEMBER2) TEXT('Test Member')
Press Enter.
Now one more member has been added to the physical file TESTPF.
To run RUNQRY over that particular member, type RUNQRY on the command line, press F4, and give the member name, as shown in the screenshot below.
Logical File
Introduction
- A logical file provides a unique way of accessing the data stored in a physical file.
- Logical files are used to define specific access paths or views of the data in a physical file, making it easier and more efficient to retrieve data for specific purposes.
- We can filter the data by criteria using the SELECT and OMIT keywords.
- A logical file (LF) does not contain any data; it provides a view of a physical file, storing only the access path rather than the data itself.
- More than one logical file can be derived from one physical file.
- Logical files can contain up to thirty-two record formats.
Common Commands
- CRTLF: To create a logical file, we can either take option 14 or use the ‘CRTLF’ command.
- CHGLF: It is used to modify the attributes and definitions of a logical file.
- ADDLFM: By using the ADDLFM command we can add members to the logical file.
Types of Logical Files
- Non-Join Logical File
- Join Logical File
Type of Entries
- File-level entries (optional): REFACCPTH. Syntax: REFACCPTH(LibraryName/DatabaseFileName). The access path information for this logical file is copied from another PF or LF.
DYNSLT: This file-level keyword indicates that the selection and omission specified in the file are done at processing time. Dynamic selection occurs whenever the program reads the file; without DYNSLT, select/omit is applied through the access path before the file is read.
- Record-level entry: PFILE. In the record-level entry we define the physical file whose data is going to be accessed by this logical file. The format of the keyword is PFILE(LibraryName/PFName).
Besides these, there are three more optional levels of entries: field-level entries, key-field-level entries, and select/omit-level entries.
Below is the screenshot defining the levels of entries in a logical file.
Examples
- Non-Join Logical File: how to create a logical file.
To create an LF, press F6 and fill in the following details.
Press Enter.
Press F4 on first line
First we define the record format, so type “R” in the Name Type column and give the record format a name, as shown in the screenshot.
In the Functions column we use the keyword PFILE with the physical file’s library and name.
After this, mention the fields that you want in this logical file from that physical file.
Below is the screenshot for the same:
Now, to compile this LF, we can either use option 14 or the ‘CRTLF’ command, as shown below.
CRTLF FILE(PIOLIB/TESTLF) SRCFILE(PIOLIB/QDDSSRC) SRCMBR(TESTLF)
- Select/Omit keyword: We use the SELECT/OMIT keywords to filter the records of a physical file according to our need. We have one physical file, ‘TESTPF’, with some records, as shown below.
Now create a logical file that refers to this physical file “TESTPF” with select/omit criteria, as shown in the screenshot below.
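Since the screenshot is not reproduced, here is a hedged DDS sketch of such a logical file (the select criterion on CITY is an assumption, and DDS entries are column-sensitive):

A          R TESTPFR                   PFILE(PIOLIB/TESTPF)
A            ID
A            NAME
A            CITY
A          K ID
A          S CITY                      COMP(EQ 'DELHI')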
Below is the output screenshot.
- Join Logical File
- A join logical file is a logical file that combines two or more physical files.
- In a join logical file, only one record format can be specified.
A. Type of Entries in Join Logical File
- i) File-level entries (optional): ‘JDFTVAL’. This keyword in the LF is used to specify default values for fields that do not exist in one of the joined physical files.
- ii) Record-level entries: ‘JFILE’. This keyword in the LF is used to specify the files to be joined.
- iii) Join-level entries: ‘JOIN’, ‘JFLD’, ‘JDUPSEQ’. JOIN specifies the files being joined and their positions; there must be one primary file, and there can be more than one secondary file. JFLD is used to specify the join fields in a logical file.
JDUPSEQ: This join-level keyword is used to specify the order in which records with duplicate join fields are presented when the JLF is read.
- iv) Field-level entries (optional): ‘JREF’, ‘DYNSLT’, ‘RENAME’, ‘ALL’, etc. JREF: we can use this field-level keyword in join logical files for fields whose names are specified in more than one physical file. The keyword identifies which physical file contains the field; we can specify either the physical file name or its relative file number. The other levels of entries are the same as for a non-join logical file, i.e., key-field-level entries and select/omit-level entries.
Example:
Here we have two Physical files PF01 and PF02 with the following data. . . . . .
PF01
Output:
PF02
Output
LF01 (To join PF01 and PF02)
Output (JOIN of PF01 and PF02)
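The screenshots are not reproduced, so here is a hedged DDS sketch of what LF01 might look like (the join field EMPID and the other field names are assumptions):

A                                      JDFTVAL
A          R JLFREC                    JFILE(PIOLIB/PF01 PIOLIB/PF02)
A          J                           JOIN(PF01 PF02)
A                                      JFLD(EMPID EMPID)
A            EMPID                     JREF(PF01)
A            EMPNAME
A            EMPDEPT
A          K EMPID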
Field Reference Files
DDS Keywords:
- REF – This is a file level keyword.
- REFFLD – This is a field level keyword.
- FORMAT – This is a record-level keyword.
The above DDS keywords allow us to refer to a field description in an existing file.
Utilizing these keywords eliminates the necessity to repeatedly specify the field and its description when used in another file.
A) Using REF(Reference) keyword in Physical File (PF)
REF keyword is a file-level keyword in DDS for physical files.
This REF keyword can be used to specify files from where the field descriptions are to be retrieved in the current DDS PF.
Syntax:
- REF keyword with library name (optional), file name, and record format name (optional): REF(LibraryName/FileName RecordFormatName)
- REF keyword with file name only: REF(FileName)
- If you do not specify a library name, then at compilation time the library list (*LIBL) is searched for the file.
- If you do not specify the record format name, then each record format in the file is searched sequentially to find the field description.
Example of using REF keyword in DDS physical file:
- Suppose EMPLOYEE file is a Reference file and DDS is as follows in which all fields are declared with field name, field length, data type.
- Let’s create the ACCOUNT file that refers to the field description from Reference file EMPLOYEE using the REF keyword.
- The above DDS code can also be written as shown below.
So, all the fields in ACCOUNT have the same field attributes as defined in EMPLOYEE file after using the REF keyword.
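A hedged DDS sketch of the two files (field names EMP_ID and EMP_NAME follow the later walkthrough; lengths are assumptions):

A* EMPLOYEE - the reference file
A          R EMPR
A            EMP_ID         5P 0
A            EMP_NAME      25A

A* ACCOUNT - takes its field definitions from EMPLOYEE via REF
A                                      REF(PIOLIB/EMPLOYEE)
A          R ACCOUNTR
A            EMP_ID    R
A            EMP_NAME  R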
B) Using REFFLD(Referenced Field) keyword in Physical File
The REFFLD keyword is a field-level keyword in DDS Physical files.
This REFFLD keyword can be used to refer to field descriptions either from one file or multiple files.
Syntax:
- REFFLD with only the referenced field name, when referring to a field in the same DDS file: REFFLD(ReferenceFieldName)
- REFFLD with record format name (optional) and referenced field name, when referring to a field in the same DDS file: REFFLD(RecordFormatName/ReferenceFieldName)
- REFFLD with referenced field name and file name, when referring to a field in a different DDS file: REFFLD(ReferenceFieldName FileName)
- REFFLD with referenced field name, library name (optional), and file name, when referring to a field in a different DDS file: REFFLD(ReferenceFieldName LibraryName/FileName)
- REFFLD with record format name (optional), referenced field name, library name (optional), and file name, when referring to a field in a different DDS file: REFFLD(RecordFormatName/ReferenceFieldName LibraryName/FileName)
Example of using REFFLD keyword in DDS physical file:
- Suppose EMPLOYEE File is a Reference file and DDS is as follows.
- Let’s create a file ACCOUNT2 that refers to the field description from Reference file EMPLOYEE and from the same file ACCOUNT2 using the REFFLD keyword.
- Here in the above example field ADDRESS1 is the field defined in ACCOUNT2 file itself.
- Field ADDRESS2 is referred from field ADDRESS1 in the same DDS ACCOUNT2.
- Field ADDRESS3 is again referred from the same field ADDRESS1 in the same DDS ACCOUNT2 only the record format name is used along with field name.
- Field ACC_ID is referred from field EMP_ID in file EMPLOYEE.
- Field ACC_NAME is referred from field EMP_NAME in file EMPLOYEE.
- Field FIELD is referred from field ADDRESS1 in the same DDS file ACCOUNT2.
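A hedged DDS sketch of ACCOUNT2 matching the walkthrough above (the record format name and field lengths are assumptions):

A          R ACCT2R
A            ADDRESS1      30A
A            ADDRESS2  R               REFFLD(ADDRESS1)
A            ADDRESS3  R               REFFLD(ACCT2R/ADDRESS1)
A            ACC_ID    R               REFFLD(EMP_ID EMPLOYEE)
A            ACC_NAME  R               REFFLD(EMP_NAME PIOLIB/EMPLOYEE)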
C) Format keyword in Physical File
This record-level keyword is used to specify that this record format is to share the field specifications for a previously defined record format. The name of the record format you are defining must be the name of the previously defined record format.
FORMAT is required when you want to refer to an existing record format.
Syntax:
FORMAT([library-name/] database-file-name)
- The database-file-name parameter is required. It is the name of the physical or logical file from which the previously defined record format is taken.
- The library name is optional. If you do not specify the library-name, the library list (*LIBL) in effect at file creation time is used.
The FORMAT keyword is not valid in join logical files, and you cannot specify a join logical file as the parameter value on the FORMAT keyword.
Example:
- If you want to create a file with the same record format as another PF, you can use the FORMAT keyword below.
- Below is the DDS for ACCOUNT2 file (LF) having the same record format name ACCOUNTR as the file ACCOUNT.
- This means that the record format ACCOUNTR will have the same field names and attributes as the record format in the physical file ACCOUNT (mentioned in FORMAT keyword).
- You do not need to specify the field names and attributes in this LF.
- If necessary, you can specify key specifications and select/omit specifications if you want them to be in effect for this file. (They can be the same as or different from the previously defined record format.)
- Below is the DDS for ACCOUNT having the same record format name ACCOUNTR.
- Below is the DDS for ACCOUNT2 having the same record format name ACCOUNTR.
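A hedged DDS sketch of both sources (field names and lengths are assumptions):

A* ACCOUNT (PF) - defines record format ACCOUNTR
A          R ACCOUNTR
A            ACC_ID         5P 0
A            ACC_NAME      25A

A* ACCOUNT2 (LF) - shares ACCOUNTR via the FORMAT keyword
A          R ACCOUNTR                  PFILE(PIOLIB/ACCOUNT)
A                                      FORMAT(PIOLIB/ACCOUNT)
A          K ACC_ID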
Data Definition Language
- Data Definition Language (DDL) is the form of SQL that creates, alters, and deletes database objects.
- A DDL source member is created using option F6, just like DDS. To compile (run) the DDL source, the RUNSQLSTM command is used:
- RUNSQLSTM SRCFILE(lib/file) SRCMBR(member) COMMIT(*NONE)
Create a DDL table: The statement ‘CREATE OR REPLACE TABLE tablename’ creates a table.
RUNSQLSTM: This is the CL command used to run SQL statements; to create a table from DDL source, RUNSQLSTM is used. COMMIT is the commitment-control parameter that determines how changes to the file are handled. If *CHG or *ALL is specified for COMMIT, records changed by ALTER, INSERT, DELETE, DROP, and so on are locked under commitment control; if *NONE is specified, there is no such lock on the file.
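As a hedged example, a DDL source member run with RUNSQLSTM might contain something like this (library, table, and column names are assumptions; system naming is assumed):

CREATE OR REPLACE TABLE PIOLIB/TESTPF
      (ID    INTEGER NOT NULL,
       NAME  CHAR(25),
       CITY  CHAR(20),
       PRIMARY KEY (ID))
      RCDFMT TESTPFR;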
View: The table can be viewed in 2 ways.
- STRSQL – by using the command “SELECT * FROM Lib/file”
- By using RUNQRY “RUNQRY *N Lib/file”
- Using WRKQRY
To view the table in WRKQRY, choose the run option against the query name and library. Before that, the query should be created using option 1, where you select the file and the fields for it. An existing query can also be copied using option 3.
Insert: Inserting a row in a table can be done in two ways.
- By using STRSQL – “INSERT INTO lib/file VALUES(fld1_val, fld2_val, …)”
- Using UPDDTA library/file.
Update: Updating a row can be done in two ways.
- By using STRSQL – “UPDATE lib/file SET fld1 = 'value', fld2 = 'value' … WHERE fld = 'value'”
- By using UPDDTA library/file.
Indexes: An index is similar to an LF; it is simply a set of pointers used to locate the rows in a table. Indexes are used to speed up data access, mainly by partitioning the data as we need. The underlying table can still be viewed with the RUNQRY command.
Converting DDS to DDL source:
- The DDL source is plain text that is turned into SQL objects by running ‘RUNSQLSTM’.
- An SQL statement equivalent to the DDS source is created with ‘CREATE TABLE’ or ‘CREATE OR REPLACE TABLE’.
- If the existing DDS-described file contains data, ‘CHGPF’ can be used so that the data is preserved.
Usage:
- DDL tables are more robust than DDS-described tables.
- DDL is capable of all the enhancements available in SQL.
- Field names in DDL have a much higher length limit (up to 128 characters), whereas DDS field names are limited to 10 characters; short DDS field names are not always easy for everyone to understand.
- DDL can store both the field name and the system name. For example, the field name is ‘EMP_NAME’ and the system name is ‘NAME’.
Restrictions:
- Altering the structure of a table may lead to errors if it depends on some other file (constraints/integrity).
- Multi-format files are difficult to handle.
- SELECT/OMIT has no counterpart in DDL source for a table; the equivalent filtering can be expressed as a WHERE condition on views or indexes.
- There is no exact match for DATFMT/DATSEP.
Examples:
- Here a table for ticket booking is created using the timestamp data type; a timestamp holds both date and time.
A primary key is used to identify a particular unique record in the file, and it does not allow a null value.
Every file has a record format (RCDFMT); when the RCDFMT clause is not used, the system generates a record format name, which is used for the input/output of the file.
A rename clause (FOR SYSTEM NAME) provides the system name of the file; if it is not used, the same name is taken as the system name.
The LABEL keyword is used to describe the file for easier understanding.
- Employee details with a comparison of DDL and DDS.
Label on Column:
The LABEL ON statement is used to set the column heading displayed in output and used for references. Column headings are split into 20-character segments, so for ‘Employee’ and ‘Name’ to appear on separate heading lines, ‘Name’ must start after the 20th position, as represented below.
The TEXT IS clause supplies our own text description for the field.
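A hedged example of both forms (the file PIOLIB/EMPLOYEE and column EMP_NAME are assumptions; in the heading literal, 'Name' begins at position 21 so it lands on the second 20-character segment):

LABEL ON PIOLIB/EMPLOYEE
      (EMP_NAME IS 'Employee            Name');

LABEL ON PIOLIB/EMPLOYEE
      (EMP_NAME TEXT IS 'Employee name');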
Index: In DDL, an index plays the part of a keyed LF from DDS source. An index is used to pinpoint particular rows in a table or PF quickly. Index objects themselves are not viewed like tables.
Control Language (CL)
Operations and Functions
Built In Functions
Introduction:
Built-in function is a function that is already available in a programming language, application, or another tool that can be accessed by end users. The term “built-in” refers to the fact that these functions are part of the core functionality of the language. A variety of different office suites, business applications, and programming languages offer built-in functions to simplify the user experience. The following is a list of built-in functions that can be utilized in CL:
- %ADDRESS
Syntax ->%ADDRESS(variable name) / %ADDR(variable name)
- %BINARY
Syntax ->%BINARY(character-variable-name starting-position length)
- %CHECK
Syntax ->%CHECK(comparator-string base-string [starting-position])
- %CHECKR
Syntax ->%CHECKR(comparator-string base-string [starting-position])
- %OFFSET
Syntax ->%OFFSET(variable name)/ %OFS(variable name)
- %SCAN
Syntax ->%SCAN(search-argument source-string [starting-position])
- %SUBSTRING
Syntax ->%SUBSTRING(character-variable-name starting-position length)/
%SST(character-variable-name starting-position length)
- %SWITCH
Syntax ->%SWITCH(8-character-mask)
- %TRIM
Syntax ->%TRIM(character-variable-name [characters-to-trim])
- %TRIML
Syntax ->%TRIML(character-variable-name [characters-to-trim])
- %TRIMR
Syntax ->%TRIMR(character-variable-name [characters-to-trim])
- %CHAR
Syntax ->%CHAR(convert-argument)
- %DEC
Syntax ->%DEC(convert-argument [total-digits decimal-places])
- %INT
Syntax ->%INT(convert-argument)
- %LEN
Syntax ->%LEN(variable-argument)
- %LOWER
Syntax ->%LOWER(input-string [CCSID])
- %UPPER
Syntax ->%UPPER(input-string [CCSID])
- %PARMS
Syntax ->%PARMS()
- %SIZE
Syntax ->%SIZE(variable-argument)
- %UINT
Syntax ->%UINT(convert-argument) / %UNS(convert-argument)
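To make a few of these built-ins concrete, here is a small hedged CL sketch using %TRIM, %SST, and the *BCAT operator (variable names and values are illustrative):

PGM
   DCL VAR(&NAME)  TYPE(*CHAR) LEN(20) VALUE('  JOHN DOE ')
   DCL VAR(&FIRST) TYPE(*CHAR) LEN(10)
   DCL VAR(&MSG)   TYPE(*CHAR) LEN(50)
   /* Strip leading and trailing blanks */
   CHGVAR VAR(&NAME) VALUE(%TRIM(&NAME))
   /* Take the first four characters: 'JOHN' */
   CHGVAR VAR(&FIRST) VALUE(%SST(&NAME 1 4))
   /* Build the message text with blank-inserting concatenation */
   CHGVAR VAR(&MSG) VALUE('First name:' *BCAT &FIRST)
   SNDPGMMSG MSG(&MSG)
ENDPGM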
Operators In CL
Introduction:
It is a symbol that usually represents an action or process. These symbols were adapted from mathematics and logic. An operator can manipulate a certain value or operand. For example, in 2 + 3, the 2 and 3 are the operands and the + symbol is the operator. Below is the list of common operators that can be utilized in CL:
Predefined value | Predefined symbol | Meaning | Type |
---|---|---|---|
| + | Addition | Arithmetic operator |
| – | Subtraction | Arithmetic operator |
| * | Multiplication | Arithmetic operator |
| / | Division | Arithmetic operator |
*CAT | || | Concatenation | Character string operator |
*BCAT | |> | Blank insertion with concatenation | Character string operator |
*TCAT | |< | Blank truncation with concatenation | Character string operator |
*AND | & | AND | Logical operator |
*OR | | | OR | Logical operator |
*NOT | ¬ | NOT | Logical operator |
*EQ | = | Equal | Relational operator |
*GT | > | Greater than | Relational operator |
*LT | < | Less than | Relational operator |
*GE | >= | Greater than or equal | Relational operator |
*LE | <= | Less than or equal | Relational operator |
*NE | ¬= | Not equal | Relational operator |
*NG | ¬> | Not greater than | Relational operator |
*NL | ¬< | Not less than | Relational operator |
File Operations
- Database files
- Display files
We can send a display to a workstation and receive input from the workstation for use in the CL procedure or program, or we can read data from a database file for use in the CL procedure or program.
There are a few important points related to variables used in CL:
1. The data types used for CL variables are *CHAR, *DEC, *LGL, *INT, and *UINT.
2. Variable names start with ‘&’, for example &IN03, &COUNT.
3. The DCL command is used to declare variables.
4. Variables from a display file are automatically available to the program.
5. The CHGVAR command is used to assign values to variables, for example CHGVAR VAR(&COUNT) VALUE(2).
There are some limitations in CL compared to RPGLE:

CL | RPGLE |
---|---|
1. It cannot be used to ADD or UPDATE database files, as it does not have WRITE or UPDATE opcodes like RPGLE. However, we can use the RUNSQL command to perform these operations. | 1. It can be used to ADD or UPDATE database files. |
2. It does not support subfiles, but it does support one output message subfile. | 2. It supports all types of subfiles. |
3. It does not support program-described files. | 3. It does support program-described files. |
4. It does not support printer files. | 4. It does support printer files. |
5. It does not support the indicator data structure. | 5. It does support the indicator data structure. |
6. It can have only five files (database and display files) per program. | 6. It can have a maximum of 50 files (including 8 printer files). |
There are few important points related to usage of database files and display files in CL programs:
Database File | Display File |
---|---|
1. Only database files with a single record format may be used by a CL procedure or program. | 1. Display files may have up to 99 record formats. |
2. The files may be either physical or logical files, and a logical file may be defined over multiple physical file members. | 2. The file defined must be a display file. |
3. Only input operations, with the RCVF command, are allowed. | 3. All data manipulation commands (SNDF, SNDRCVF, RCVF, ENDRCV and WAIT) are allowed for display files. |
4. DDS is not required to create a physical file which is referred to in a CL procedure or program. If DDS is not used to create a physical file, the file has a record format with the same name as the file, and there is one field in the record format with the same name as the file, and with the same length as the record length of the file (RCDLEN parameter of the CRTPF command). | 4. The display file must be defined with the DDS. |
5. The file need not have a member when it is created for the module or program. It must, however, have a member when the file is processed by the program. | 5. The display file must have a member when it is created for the module or program. |
6.The file is opened for input only when the first Receive File (RCVF) command is processed. The file must exist and have a member at that time. | 6. The display file is opened for both input and output when the first SNDF, SNDRCVF, or RCVF command is processed. |
7. The file remains open until the procedure or original program model (OPM) program returns or when the end of file is reached. When the end of file is reached, message CPF0864 is sent to the CL procedure or program, and additional operations are not allowed for the file. The procedure or program should monitor this message and take appropriate action when end of file is reached. | 7. The file remains open until the procedure or OPM program returns. |
In this section, we describe all the operations that we can perform on database files: read, write, update, chain, and set lower limit. These operations are equivalent to the RPG file operations, and here we will see how to implement them in CL programs.
We will also describe all the operations which we can perform on display files. We have also listed all the commands which we can use to handle the files in CL programs.
This will also include all the operations or commands related to file operation which are not supported by the CL program or procedure.
Usage:
The operations which we can perform on files:
Database files:
In CL, only input operations can be performed on database files, using the RCVF command. Other operations, such as write or update, cannot be done directly on database files in CL.
In the examples below, however, we will see alternative ways to perform in CL the operations, such as write, set lower limit, chain, and update, that we normally do in RPG.
1) Read Operation
To be able to read a file in a CL program,
- First, we must Declare the File using declare file command, DCLF. The file must exist before compilation of the CL program.
- Then use the Receive File command, RCVF to read or retrieve data from the file.
1.1) If we want to read the file in a loop, we can use a DO loop:
- We have declared the file STUMASTER which we want to read.
- We are using Do loop to read the file, so it will read the file from the start.
- MONMSG for message ID CPF0864 indicates that end of file has been reached; the EXEC command then runs, and it leaves the loop.
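A hedged CL sketch of this read loop (STUMASTER as in the walkthrough; the processing step is left as a comment):

PGM
   DCLF FILE(STUMASTER)
   DCL VAR(&LOOP) TYPE(*LGL) VALUE('1')
   DOWHILE COND(&LOOP)
      RCVF                                /* read the next record */
      MONMSG MSGID(CPF0864) EXEC(LEAVE)   /* end of file: leave the loop */
      /* ... process the record fields here ... */
   ENDDO
ENDPGM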
1.2) Using Labels:
We have seen above that we have used Do loop for reading the file in loop. Alternative to loops, in CL we can use labels. We use GOTO command to move to the labels.
The below example shows how we can use labels for reading a file till the end of file is reached.
- READ and END are the labels.
- On line 4.01, it will read the record from file.
- On line 4.02, if we reach the end of file then Exec command execute, and it will send the program to the label END.
- On line 4.03, if we don’t reach the end of file then it will send the program to label READ to execute the RCVF command again.
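A hedged sketch of the label-driven version:

PGM
   DCLF FILE(STUMASTER)
READ:
   RCVF                                          /* read a record */
   MONMSG MSGID(CPF0864) EXEC(GOTO CMDLBL(END))  /* end of file */
   /* ... process the record fields here ... */
   GOTO CMDLBL(READ)
END:
ENDPGM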
1.3) If we want to read more than one file:
We need to use the Open File Identifier, OPNID, parameter to give each file its own unique id. We just use a letter, but the OPNID can be up to ten characters.
- When we use the RCVF we need to mention which file to use. The OPNID is used and must match the value in a file declaration.
- When we use the OPNID the fields’ name are automatically prefixed with the open identifier and an underscore (_). This ensures that the field names are unique.
In RPG we use the PREFIX keyword for this purpose; in CL we can use OPNID, which makes each field name unique.
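A hedged sketch of declaring and reading two files with OPNID (the second file name STUMARKS is an assumption):

PGM
   DCLF FILE(STUMASTER) OPNID(S)
   DCLF FILE(STUMARKS)  OPNID(M)
   RCVF OPNID(S)    /* fields arrive as &S_fieldname */
   RCVF OPNID(M)    /* fields arrive as &M_fieldname */
ENDPGM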
1.4) If we want to read a file from a particular record not from start:
This is similar to what we do in RPGLE with SETLL or CHAIN, but CL does not have those commands.
For this we can use the Override with Database File command, OVRDBF.
In the below example we will see how we can do this,
- First, we have declared a database file.
- We assigned the value to variable &key on which we want to position the pointer.
- Then, we have used the OVRDBF command for overriding the declared file (STUMASTER) using the Starting Position in File parameter; POSITION will position the file pointer to that point in the file when we perform our first “read”. The four parts of this parameter are:
- Retrieve order – *KEY this means position the file pointer to the exact match on the key. The other options we can use are *KEYB, *KEYBE, *KEYA, *KEYAE, *RRN.
- Number of key fields – the number of key fields in the file being read. This file has two key fields.
- Record format with the key – by using *N we are telling the command to use the only member in the file.
- Key value – this can either be a variable, as we have shown, or we can enter a literal instead.
After positioning the pointer at a specific record, we read the file in a loop.
Note: OVRDBF is used to override the attributes of a physical file. It can also make our program use some other file in place of the one named in the program.
All overrides are temporary and remain in effect only while the override command is in scope.
The parameters of OVRDBF used in below example are;
- File: specify the file being declared in the program.
- Position: It tells the starting position for reading records from the database file. Possible values are *NONE, *START, *END, *RRN (provide relative record number i.e. nth record in file), record specified on key field value (*KEY, *KEYA, *KEYAE, *KEYB, *KEYBE).
- Ovrscope: It tells the scope of the override. There can be three possible values it can have:
- *ACTGRPDFN: The scope of the override is determined by the activation group of the program that calls this command. If the program runs in the default activation group, the scope is the call level of the calling program; otherwise, it is the activation group of the calling program.
- *CALLLVL: The scope of the override is determined by the current call level. All open operations done at higher or same call level than the current call level are affected by this override.
- *JOB:The scope of the override is the job in which the override occurs.
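A hedged sketch of positioning with OVRDBF before the first read (a single key field and the key value 'A001' are assumptions, whereas the file in the walkthrough has two key fields):

PGM
   DCLF FILE(STUMASTER)
   DCL VAR(&KEY)  TYPE(*CHAR) LEN(10) VALUE('A001')
   DCL VAR(&LOOP) TYPE(*LGL)  VALUE('1')
   /* Position the file pointer at the first record matching &KEY */
   OVRDBF FILE(STUMASTER) POSITION(*KEY 1 *N &KEY) OVRSCOPE(*ACTGRPDFN)
   DOWHILE COND(&LOOP)
      RCVF
      MONMSG MSGID(CPF0864) EXEC(LEAVE)   /* end of file */
      /* ... process the record ... */
   ENDDO
   DLTOVR FILE(STUMASTER)
ENDPGM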
2) Write Operation
In CL there is no write command, but we can use the Run SQL (RUNSQL) command to insert a record into a file.
In the below example we have used SQL query and run it using run SQL command to insert the data into file. In the below example we want to insert values in two fields so for that we have declared two variables &VAR1 and &VAR2 with values. These variables are used in SQL statement to insert value into table.
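A hedged sketch of such an insert (the table STUMASTER and columns STUID and STUNAME are assumptions):

PGM
   DCL VAR(&VAR1) TYPE(*CHAR) LEN(10) VALUE('A001')
   DCL VAR(&VAR2) TYPE(*CHAR) LEN(25) VALUE('JOHN DOE')
   DCL VAR(&STMT) TYPE(*CHAR) LEN(256)
   /* Build: INSERT INTO STUMASTER (STUID, STUNAME) VALUES('A001', 'JOHN DOE') */
   CHGVAR VAR(&STMT) VALUE('INSERT INTO STUMASTER (STUID, STUNAME) VALUES(''' +
      *CAT &VAR1 *TCAT ''', ''' *CAT &VAR2 *TCAT ''')')
   RUNSQL SQL(&STMT) COMMIT(*NONE)
ENDPGM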
3) Update Operation:
CL does not have an Update command, so in CL we cannot update database files.
But for this, below we have shown an alternative way to update database files and for that we can use RUNSQL command.
Let’s discuss what we are doing:
- First, we have declared a file which we want to update.
- We have assigned the value in the variable (&Key), this we will use to update the record which matches this value.
- We are positioning the pointer to the &key value in the database file and for this we are overriding the declared file (STUMASTER) using the Starting Position In File parameter, POSITION, this will position the file pointer to that point in the file where we perform our “UPDATE”.
- After positioning at specific record, we can use RUNSQL command to update the record. Inside the RUNSQL command we can put the update SQL query.
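A hedged sketch of the update via RUNSQL (the columns CLASS and NAME follow the commit/rollback discussion below; the values are illustrative):

PGM
   DCL VAR(&KEY)  TYPE(*CHAR) LEN(10) VALUE('CNTRL')
   DCL VAR(&STMT) TYPE(*CHAR) LEN(256)
   /* Build: UPDATE STUMASTER SET CLASS = 5 WHERE NAME = 'CNTRL' */
   CHGVAR VAR(&STMT) VALUE('UPDATE STUMASTER SET CLASS = 5 WHERE NAME = ''' +
      *CAT &KEY *TCAT '''')
   RUNSQL SQL(&STMT) COMMIT(*NONE)
ENDPGM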
4) Error handling for file operations in CL:
When the RCVF command is used to read the declared file in CL, and the file is empty or end of file is reached, it will throw an error if the condition is not handled.
First, we will see if we do not handle error then what happens.
Below is an example of the error.
- We are reading the file STUMASTER which is empty.
In the code below, we read this empty file using the RCVF command.
In the above example, even if the file has some data, it will still throw an error when end of file is reached if we have not handled the end-of-file condition.
To handle the above errors we use MONMSG.
Now in the code,
- On line 4.02 we have defined MONMSG with MSGID CPF0864, which indicates that end of file has been reached or the file is empty; the EXEC command then runs, sending control to ENDPGM, and the program ends normally.
To handle any error that occurred in a program we can define one MONMSG and it will handle at the program level.
Below is the example for that.
- On line 2.05 we have defined MONMSG with MSGID CPF0000, which is a generic message ID that handles errors at the program level, meaning any error that occurs in the program will be handled.
- On line 4.03 we have used SNDPGMMSG to send a message that an error occurred.
When we call the above code, we get a message like the one below.
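For reference, a hedged sketch of such a program-level monitor:

PGM
   DCLF FILE(STUMASTER)
   /* Placed right after the declares, this catches any unmonitored CPF error */
   MONMSG MSGID(CPF0000) EXEC(GOTO CMDLBL(ERROR))
   RCVF
   /* ... normal processing ... */
   RETURN
ERROR:
   SNDPGMMSG MSG('An error occurred in the program')
ENDPGM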
5) Using Commit & Rollback in CL:
Commit means that the changes made in the current transaction are made permanent, and rollback means cancelling all the changes made in the current transaction. With the help of these commands we can maintain consistency in the data of files.
To use commitment control on a file, the file must be journaled.
Commit and rollback are used within a block that starts with the STRCMTCTL (Start Commitment Control) command and ends with the ENDCMTCTL (End Commitment Control) command. Inside this block we perform the operations on the file and use commit or rollback as required.
Let’s see how we can use commit and rollback in CL by a simple example-
In the code below we first update the file STUMASTER and use commit to make the changes permanent; we then update STUMASTER again and use rollback to cancel those changes.
Explanation:
- On Line 2.04, file STUMASTER is declared.
- On line 2.08, STRCMTCTL is used with LCKLVL(*CHG), which locks changed records under commitment control; this starts the commitment control block.
- On Line 2.10, we position the pointer on the file equal to the key value.
- On line 2.12, we are reading the file using RCVF.
- On line 2.14, we are updating the value of field Class in file with 5 where Field Name = ‘CNTRL’.
- On line 2.20, we used Commit which makes the changes in the file permanent.
- On Line 2.21, we update the Field Class by 7 in file.
- On line 2.25, we used Rollback which means it will cancel the changes done in file just now.
After ROLLBACK, the update of field ‘CLASS’ to 7 is cancelled, and the field reverts to its previous value.
On line 2.27, ENDCMTCTL (End Commitment Control) is used, which ends the commitment control block.
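A hedged CL sketch of the same flow (the columns CLASS and NAME are assumptions, and STUMASTER must be journaled):

PGM
   STRCMTCTL LCKLVL(*CHG)
   RUNSQL SQL('UPDATE STUMASTER SET CLASS = 5 WHERE NAME = ''CNTRL''') +
          COMMIT(*CHG)
   COMMIT      /* make the first change permanent */
   RUNSQL SQL('UPDATE STUMASTER SET CLASS = 7 WHERE NAME = ''CNTRL''') +
          COMMIT(*CHG)
   ROLLBACK    /* cancel the second change */
   ENDCMTCTL
ENDPGM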
B) Display files:
Now let’s see how we can use display files in CL. Three commands are mainly used for display files: DCLF, SNDF, and SNDRCVF. SNDRCVF is analogous to EXFMT in RPGLE. Let’s see how to use display files in CL with the example shown below.
The DDS of the display file, named AIRTHDSP, where we find the position of a character within the entered string:
Now the program using above display file is:
In the program we can see,
- First using DCLF command we have declared the display file AIRTHDSP.
- Then we have used SNDRCVF which is used to receive and send data to display screen, so basically it will show the display screen.
- Then we have put the logic to perform the operation which we want to perform on display file.
Let’s see the result of the above example.
In the entered string we have to find the position of $ which is 26.
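Since the program is shown only as a screenshot, here is a hedged single-pass sketch (the display file fields &STRING and &POS and indicator &IN03 are assumptions; %SCAN in CL requires a recent IBM i release):

PGM
   DCLF FILE(AIRTHDSP)              /* declare the display file */
   SNDRCVF                          /* show the screen and wait for input */
   IF COND(*NOT &IN03) THEN(DO)     /* F3 not pressed */
      /* Find the position of '$' in the entered string */
      CHGVAR VAR(&POS) VALUE(%SCAN('$' &STRING))
      SNDRCVF                       /* redisplay with the result */
   ENDDO
ENDPGM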
Commands that are used in CL for file handling:
1. DCLF: This command is used to declare a display or database file to your CL procedure or program. The Declare File (DCLF) command cannot be used to declare files such as tape, printer, and mixed files.
The file must exist before the module or program is compiled.
2. The only commands we can use with a display file to send or receive data in CL procedures and programs are the Send File (SNDF), Receive File (RCVF), and Send/Receive File (SNDRCVF) commands.
2.1. SNDF: The Send File (SNDF) command is used by a CL program or ILE CL procedure to send a record to a display device that is being used by an interactive user.
- This command is valid only within a CL program or ILE CL procedure.
- This command is valid only for display files.
- This command cannot be used with database files.
2.2. RCVF: The Receive File (RCVF) command is used by a CL program or ILE CL procedure to receive data from a display device or database file. The command reads a record from the file and puts the data from the record into one or more CL variables. The data that is entered by a user at the display or is contained in the input record is copied into CL variables in the program by the RCVF command, where it is processed by the program.
If the file has not been opened by a previous RCVF, SNDRCVF, or SNDF command, it is opened by this command. If the file has been previously closed due to an end-of-file condition on a previous RCVF command, an error occurs.
2.3. SNDRCVF: The Send/Receive File (SNDRCVF) command is used by a CL program or ILE CL procedure to send data to and receive data from a device that is being used interactively by a user. If the device file has not been opened, it is opened by this command.
This command is valid only within a CL program or ILE CL procedure and only for display files. It cannot be used with database files.
Restrictions:
There are some restrictions or don’ts for files in CL, and those are listed below:
1. The WAIT and DEV parameters on the Receive File (RCVF) command are not allowed for database files. In addition, the SNDF, SNDRCVF, and ENDRCV commands are not allowed for database files.
2. CL does not support the indicator data structure, so the INDARA keyword should not be used in display file DDS.
3. There are no write or update commands in CL for adding or updating records in database files; the alternatives are discussed above.
4. CL does not support subfiles, but a single output message subfile is a special type of subfile that is supported well in CL.
5. CL cannot use program-described files.
6. CL cannot use printer files.
7. CL can have only five files (display or database files) per program.
Code Example:
In the below example we will cover both database file and display file, what we have learned above.
In the example,
- First, we have a database file having four fields Enrollnum, Name, Batch, Department.
- First we create a display file from which we enter the enrollment number we want the record for; if the enrollment number is found in the database file, the records related to that enrollment number are shown on the screen.
The below screen shot is the display file DDS.
The below screen shot is the program for the above example.
Let’s discuss the above example code line by line.
- From line 2.01 to 2.05, we have declared the variables required in the code.
- On line 2.10 and 2.11, we have declared the display file and database file and because here two files are declared so we have used OPNID.
- Inside the loop on line 2.15, we have used SNDRCVF to read and write the display screen.
- On Line 2.25, we put a condition that if F3 is on then exit from screen.
- On lines 2.27 to 2.30, we have put a condition that if no enrollment number has been entered on the screen, an error is thrown.
- On line 2.32, we have assigned the enrollment number to the variable &key which is used to chain this value on database file.
- On line 2.39 to 2.40, we have used OVRDBF command to position the pointer on the record of database file which matches with the &key value.
- On Line 2.42, we have used RCVF to read the matched record from the database file.
- On line 2.43 to 2.46, we have put logic to manage the error.
- On line 2.49 to 2.52, we are assigning the values to the display screen fields from the database file fields. One important thing here is that on each database file fields we have prefixed the OPNID value.
- On line 2.56, we have used RCLRSC, which makes sure that everything is closed and the overrides are cleared properly.
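The source is shown only as screenshots; here is a condensed, hedged single-pass sketch of the same idea (the display file ENRLDSP, database file STUDENTS, and all field names are assumptions; the real program loops as described above):

PGM
   DCL VAR(&KEY) TYPE(*CHAR) LEN(10)
   DCLF FILE(ENRLDSP)  OPNID(D)     /* display file */
   DCLF FILE(STUDENTS) OPNID(S)     /* database file keyed on enrollment no. */
   SNDRCVF OPNID(D)                            /* show screen, get input */
   IF COND(&D_IN03) THEN(RETURN)               /* F3 = exit */
   CHGVAR VAR(&KEY) VALUE(&D_ENROLL)           /* enrollment number entered */
   OVRDBF FILE(STUDENTS) POSITION(*KEY 1 *N &KEY)
   RCVF OPNID(S)                               /* read the matching record */
   MONMSG MSGID(CPF0000) EXEC(DO)              /* not found / end of file */
      SNDPGMMSG MSG('Record not found')
      RETURN
   ENDDO
   CHGVAR VAR(&D_NAME)  VALUE(&S_NAME)         /* map DB fields to screen */
   CHGVAR VAR(&D_BATCH) VALUE(&S_BATCH)
   CHGVAR VAR(&D_DEPT)  VALUE(&S_DEPARTMENT)
   SNDRCVF OPNID(D)                            /* redisplay with the data */
   RCLRSC                                      /* clear overrides, close files */
ENDPGM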
Now let’s see the output we get when we run the above example.
Error & Message Handling
Errors in CL programs can be categorized as program-defined errors and system errors. Program-defined errors are those you anticipate and handle within your CL program, while system errors are unforeseen issues that surface as system messages.
MONMSG and SNDPGMMSG are the basic commands available to handle errors and messages on IBM i.
The MONMSG command is a fundamental construct for error handling in CL programs. It is used to monitor specific messages and take actions when those messages occur.
In error handling, it is common to display error messages to the user and/or log them for later analysis. You can use the SNDPGMMSG command to send a message to the program message queue or display it on the user’s screen.
The SNDBRKMSG command can be used to send a break message, which immediately interrupts the user’s display.
Error and message handling in CL programming are vital for creating robust applications on the IBM i platform. By effectively handling errors and providing informative messages, you can improve the reliability and maintainability of your programs.
In IBM i (formerly AS/400) CL programming, handling errors and messages is a critical part of ensuring the reliability and robustness of your programs. CL (Control Language) is a scripting language used on the IBM i platform to automate tasks and create programs.
MONMSG
In IBM i, the MONMSG (Monitor Message) command is used for error handling. It allows you to monitor for specific messages and take predefined actions when those messages are issued.
Below is an example and syntax with a detailed explanation of how to use MONMSG with an example:
Syntax of the MONMSG Command:
MONMSG MSGID(message-identifier) EXEC(command)
MSGID: This parameter specifies the message identifier you want to monitor for. We can specify a specific message identifier or a generic message identifier (using trailing zeros, for example CPF0000).
EXEC: This parameter specifies the command to execute if the monitored message is received. We can execute various commands, including GOTO, SNDPGMMSG, or a CALL to a custom program.
Example:
Suppose we have a CL (Control Language) program that processes files, and we want to handle any potential errors gracefully. We can use MONMSG like this:
PGM
/* Attempt to open a file */
OVRDBF FILE(INPUT) TOFILE(MYLIB/MYFILE) OVRSCOPE(*JOB)
/* Monitor for any file open errors */
MONMSG MSGID(CPF502B) EXEC(DO)
/* Send an escape message; impromptu text must travel as MSGDTA of CPF9898 */
SNDPGMMSG MSGID(CPF9898) MSGF(QCPFMSG) +
MSGDTA('Error opening file MYLIB/MYFILE') MSGTYPE(*ESCAPE)
/* Handle the error, maybe by logging it and ending the program */
/* Add your error-handling logic here */
ENDDO
/* Continue processing if the file was opened successfully */
/* Add your file processing logic here */
/* Remove the file override */
DLTOVR FILE(INPUT)
ENDPGM
Description of example:
- The program attempts to open a file using the OVRDBF command.
- The MONMSG command is used to monitor for the specific message CPF502B, which is issued when there is an error opening a file. When this message is encountered, the program jumps to the DO block.
- Inside the DO block, an error message is sent using SNDPGMMSG, and we can add our custom error handling logic.
- After handling the error, we can either end the program or continue with additional processing logic.
- This is a basic example of how to use MONMSG for error handling in IBM i. We can customize it based on our specific requirements and the types of messages we want to monitor in our application.
SNDPGMMSG
In IBM i, the SNDPGMMSG command is used to send a program message to a user or message queue. It is commonly used for error handling and reporting. You can find below a detailed explanation of error handling using SNDPGMMSG with an example.
SNDPGMMSG Command Syntax:
SNDPGMMSG MSG('Your message text') TOUSR(UserProfile) MSGTYPE(*DIAG)
MSG: Specifies the message text.
TOUSR: Specifies the user profile to send the message to.
MSGTYPE: Specifies the message type. Use *DIAG for diagnostic messages (common for errors).
MSGDTA: Specifies additional message data; it is used together with MSGID when sending a predefined message rather than impromptu MSG text.
Example of Error Handling using SNDPGMMSG:
Let’s say, we have a CL program that performs some operations and needs to handle errors by sending messages. Here’s an example:
/* Sample CL Program */
PGM
/* Declare variables */
DCL VAR(&ERROR)    TYPE(*LGL) VALUE('0') /* Error flag, initially false */
DCL VAR(&SOMECOND) TYPE(*LGL) VALUE('0') /* Placeholder error condition */
/* Perform some operations */
/* ... */
/* Check for an error condition */
IF COND(&SOMECOND) THEN(DO)
CHGVAR VAR(&ERROR) VALUE('1') /* Set error flag to true */
SNDPGMMSG MSG('An error occurred') TOUSR(USER123) MSGTYPE(*DIAG)
GOTO CMDLBL(ERRHDL)
ENDDO
/* More operations */
/* Error-handling label (CL labels are limited to 10 characters) */
ERRHDL:
IF COND(&ERROR) THEN(DO)
SNDPGMMSG MSG('Error handling: processing stopped due to an error') +
TOUSR(USER123) MSGTYPE(*DIAG)
RETURN /* Terminate the program */
ENDDO
/* Program continues if no error */
ENDPGM /* End of program */
Description of example:
- We declare a variable &ERROR to track whether an error occurred (initially set to '0' for false).
- After performing some operations, we check a condition (&SOMECOND) that could indicate an error. If the condition is met, we set &ERROR to '1' and send a diagnostic message using SNDPGMMSG.
- We use a label (ERRHDL) to handle errors. If &ERROR is set to '1', we send an error message and terminate the program.
- If no error occurred, the program continues its execution and eventually ends gracefully.
- This is a simplified example; in a real-world scenario, you would have more detailed error handling and possibly log messages to a message queue for further analysis. The SNDPGMMSG command is just one part of error handling on IBM i, and you can customize it further based on your specific needs.
Usages:
Error handling and message handling are essential aspects of CL programming on IBM i for the following reasons:
Program Reliability: Error handling helps ensure the reliability of your CL programs by allowing you to detect and respond to unexpected conditions or errors. This helps prevent program crashes or unexpected behaviour.
User Feedback: Message handling allows you to communicate with users or operators by sending messages. This can be used to provide feedback, instructions, or warnings, making your programs more user-friendly.
Diagnostic Information: Messages often contain diagnostic information that can be valuable for troubleshooting and debugging. When an error occurs, capturing and logging messages can aid in identifying the root cause.
Graceful Program Termination: Proper error handling ensures that a program terminates gracefully, releasing any acquired resources and cleaning up after itself. This is crucial for maintaining system stability.
Conditional Processing: By monitoring specific messages (e.g., CPF messages for errors), you can implement conditional processing in your CL programs. For example, you might want to take different actions depending on the type of error encountered.
Logging and Auditing: You can log messages to keep a record of program activities, errors, or significant events. This log can be useful for auditing and tracking program behaviour over time.
Interaction with Other Programs: CL programs often interact with other programs or processes. Proper error handling ensures that the calling program or process can respond appropriately to errors raised by the called program.
In summary, error handling and message handling in CL programming on IBM i are crucial for ensuring program reliability, providing feedback to users, diagnosing issues, and maintaining overall system stability.
They enable you to create robust and user-friendly applications on this platform.
Restrictions:
Limited Error Information: CL programs primarily handle messages, and the information provided in messages may be limited. To access more detailed error information, you may need to rely on APIs or interact with other system components.
Message Queue Limitations: Messages are typically sent to message queues, and there may be limitations on the number of messages that can be held in a queue. If the queue becomes full, new messages may be lost.
Message IDs: When using the MONMSG command to monitor for specific messages, you need to know the message IDs in advance. If IBM i introduces new message IDs in future releases, your monitoring may need updates.
Resource Locking: Error handling should be cautious when dealing with resource locks, as improper handling can lead to resource contention issues.
Compatibility Considerations:
IBM i Versions: Error handling and message handling techniques in CL programming are generally consistent across different versions of IBM i, but there might be slight variations or enhancements in newer releases. It is good practice to check the documentation specific to your IBM i version for any updates or changes.
Message Queue Types: IBM i supports different types of message queues, including program message queues and message queues associated with user profiles. The choice of message queue type can impact how messages are handled and accessed.
Message Queues in Subsystems: When working with subsystems, you need to consider how message queues are managed within the subsystem environment. Subsystem configurations can affect how messages are routed and monitored.
User Profile Settings: User profile settings, such as message queue authorities and message queue monitoring settings, can affect the behaviour of error handling and message handling in CL programs.
Library Lists: Ensure that any message files or message descriptions used in your CL programs are accessible through the library list of the job running the program.
Message File Changes: If you update or change message files or message descriptions, be aware of the potential impact on your CL programs that rely on those messages.
Message Text Language: Consider the language settings of message descriptions and message files. Messages may be presented in different languages based on user or system preferences.
It is important to keep these restrictions and compatibility considerations in mind when designing and maintaining error handling and message handling in IBM i CL programs. Staying informed about system updates and best practices is crucial for effective error and message management.
OVRDBF and OPNQRYF
Introduction
In IBM i CL (Control Language) programming, the “override” concept is used in the context of overriding certain system values and commands temporarily.
The format of this command is:
OVRDBF FILE(overridden-file-name) +
       TOFILE(library-name/database-file-name) +
       MBR(member-name) +
       POSITION(file-positioning-option) +
       SECURE(secure-from-previous-override) +
       SHARE(open-data-path-sharing-option) +
       OVRSCOPE(file-override-scope)
Below are the key points regarding the use of “override” in CL programming on the AS/400 platform:
- Override Commands: CL programs often use the “OVRDBF” (Override with Database File) command to temporarily change the behavior of a database file. This allows you to use a different file or record format within a program without permanently altering the file’s attributes.
- Override Database Files: “OVRDBF” is commonly used to change the file, library, or member used in a program. For example, you can override a file to work with a specific customer’s data within the same program.
- Override Control Language Defaults: The “OVRPRTF” (Override with Printer File) command is used to change the default attributes of printer files, such as page size, character set, and more, for a specific output operation within a CL program.
- Scope: Overrides in CL programming typically have a local scope, meaning they affect only the specific instance of a command or operation within the program. Once the program finishes its execution, these overrides do not persist.
- Temporary Changes: Overrides provide a way to make temporary changes to the behavior of your CL program without altering system-wide settings. This is especially useful when you need to customize program behavior for specific scenarios.
- Nesting Overrides: You can nest overrides within CL programs. For example, you can override a file within a sub-procedure, and the override will be in effect only for the duration of that sub-procedure.
- Resetting Overrides: It’s important to remember that overrides are temporary. To revert to the default settings, you may need to use the “DLTOVR” (Delete Override) command or simply let the program finish its execution.
Syntax: fixed or free format.
In IBM i Control Language (CL), you can use both fixed-format and free-format syntax for the `OVRDBF` (Override Database File) command, just like with other CL commands. Below, we will provide examples of the `OVRDBF` command in both fixed and free formats.
Fixed-Format Syntax:
PGM
OVRDBF FILE(MYLIB/MYFILE) TOFILE(MYLIB/MYFILE2) MBR(MEMBER2)
/* Other CL commands */
DLTOVR FILE(MYFILE) /* Delete the override */
ENDPGM
In fixed-format CL, you typically start each statement at a specific column and follow a specific structure. The `OVRDBF` command starts at column 6, followed by its parameters. Columns 1-5 are reserved for sequence numbers. The `DLTOVR` command is used to delete the override.
Free-Format Syntax:
PGM
   OVRDBF FILE(MYLIB/MYFILE) TOFILE(MYLIB/MYFILE2) MBR(MEMBER2)
   /* Other CL commands */
   DLTOVR FILE(MYFILE) /* Delete the override */
ENDPGM
In free-format CL, you have more flexibility in terms of layout and indentation. Statements can start at any position within a line, making the code easier to read. The above example accomplishes the same tasks as the fixed-format example but uses a more modern, flexible syntax.
You can choose the format that best suits your coding style and project requirements, but keep in mind that modern IBM i systems generally support free-format CL for increased readability and maintainability.
Usage
A basic example of how to use the `OVRDBF` command to override file attributes in a CL (Control Language) program:
OVRDBF FILE(MYLIB/MYFILE) TOFILE(MYLIB/MYFILE2) MBR(MEMBER2)
In this example, the `OVRDBF` command is used to override the file attributes for `MYLIB/MYFILE`. It specifies that the program should access `MYLIB/MYFILE2` instead of the default file, and it should use `MEMBER2` as the file member.
NOTE: Overriding file attributes should be used with caution, as it can impact the behavior of programs and jobs. It’s important to document and manage these overrides carefully to avoid unexpected issues. Additionally, you need the appropriate authority to perform file overrides on IBM i.
Example:
Here’s a simple example in IBM i CL programming that demonstrates how to use the “OVRDBF” command to temporarily override a database file:
PGM
/* Declare the file, giving it an open identifier */
DCLF FILE(MYLIB/CUSTOMER) OPNID(CUSTFILE)
/* Override the database file to use a different member */
OVRDBF FILE(CUSTOMER) TOFILE(MYLIB/CUSTOMER) MBR(NEWMBR)
/* Your program logic here */
/* You can now use the CUSTOMER file with the NEWMBR member */
/* Close the overridden file */
CLOF OPNID(CUSTFILE)
/* Delete the override */
DLTOVR FILE(CUSTOMER)
ENDPGM
In this example:
- We declare a file using the `DCLF` command, specifying the library (MYLIB) and file (CUSTOMER) that we want to work with. We also give it an open identifier (OPNID) of CUSTFILE.
- We use the “OVRDBF” command to override the CUSTOMER file temporarily. We specify that we want to use a different member (NEWMBR) within the same file. This override will only affect the file operations within the program.
- You can perform your program logic using the overridden file. Any database file operations within the program will use the specified member (NEWMBR) instead of the default.
- After you’ve finished using the overridden file, you close it using the `CLOF` command.
- Finally, we delete the override with the `DLTOVR` command, ensuring that the change doesn’t affect subsequent programs or system-wide operations.
This example demonstrates how to temporarily override a database file member within a CL program, allowing you to work with different data within the same file without permanently changing the file’s attributes.
Here is another example of a CL program, with its expected output:
PGM
/* Declare the file; its record-format fields become CL variables */
DCLF FILE(MYLIB/CUSTOMER)
DCL VAR(&CUSTID) TYPE(*CHAR) LEN(5)
DCL VAR(&CUSTNAME) TYPE(*CHAR) LEN(30)
DCL VAR(&MSG) TYPE(*CHAR) LEN(50)
/* Override the database file to use a different member */
OVRDBF FILE(CUSTOMER) TOFILE(MYLIB/CUSTOMER) MBR(NEWMEMBER)
/* Read customer data; the file is opened implicitly by the first RCVF */
READFILE:
RCVF
MONMSG MSGID(CPF0864) EXEC(GOTO CMDLBL(EOF))
/* Process customer data (assuming the record format has a single */
/* 35-byte character field named CUSTREC) */
CHGVAR VAR(&CUSTID) VALUE(%SST(&CUSTREC 1 5))
CHGVAR VAR(&CUSTNAME) VALUE(%SST(&CUSTREC 6 30))
CHGVAR VAR(&MSG) VALUE('Customer ID:' *BCAT &CUSTID)
SNDPGMMSG MSG(&MSG)
CHGVAR VAR(&MSG) VALUE('Customer Name:' *BCAT &CUSTNAME)
SNDPGMMSG MSG(&MSG)
GOTO CMDLBL(READFILE)
/* End of file reached */
EOF:
/* Delete the override; the file closes when the program ends */
DLTOVR FILE(CUSTOMER)
ENDPGM
In this example, we assume a CUSTOMER file in the MYLIB library with a member named NEWMEMBER (added beforehand with the ADDPFM command). The program overrides the file to use NEWMEMBER, retrieves and processes the customer records, and then deletes the override.
Expected Output:
Customer ID: 00123
Customer Name: Generic Text
Customer ID: 00456
Customer Name: Customer01
This example demonstrates the following steps:
- The program declares variables for customer ID and name.
- It uses the “OVRDBF” command to override the CUSTOMER file, specifying the NEWMEMBER as the member to use.
- The file is opened implicitly when the first RCVF command reads from the overridden file.
- It enters a loop to read and process customer records until the end of the file is reached.
- Inside the loop, it extracts customer IDs and names from the record and sends them as messages.
- When the end of the file is reached, it deletes the override; the file is closed automatically when the program ends.
This program temporarily overrides the database file, reads data from the specified member, and processes it, producing the expected output.
Open Query File (OPNQRYF)
Introduction
OPNQRYF command opens a database file that satisfies the database query request. It creates a temporary access path (ODP – Open Data Path) & this access path contains the information needed to select, order, group and join the records. Once the access path is created, we can read the record from the file using normal CL commands. The access path is discarded after its use.
ODP – Access path describes the order in which records are to be read. It can be kept on the system permanently (such as physical or logical file) or temporarily (OPNQRYF). OPNQRYF command creates a temporary access path for one time use, and then discarded. The open data path contains the information like file name, format name, current record pointer, record selection information etc.
Parameters of OPNQRYF:
- FILE – It specifies the name of the file to be processed.
- OPTION – It allows you to specify various options for how the query should be processed.
- FORMAT – It specifies the record format used for records. We can define which field to include in output.
- QRYSLT – It specifies the selection criteria for the records to be processed.
- KEYFLD – It specifies the fields to be used to key the records.
- IGNDECERR – It specifies whether to ignore decimal errors.
- COMMIT – Specifies whether the query file is processed under commitment control.
- OPNSCOPE – Specifies the scope of the query file.
- DUPKEYCHK – Specifies whether to check for duplicate keys in the query file.
- ALWCPYDTA – Specifies whether the database is allowed to copy data when processing query.
- OPTIMIZE – It specifies whether the query is to be optimized. It can also be used to control level of optimization.
Parameters of OPNQRYF command with SQL equivalents:
OPNQRYF parameter | SQL clause equivalent | Example |
---|---|---|
FILE | From | Select * from EMPPF; |
QRYSLT | Where | Select * from EMPPF where field1 = 'value'; |
KEYFLD | Order By | Select * from EMPPF Order By field1 ASC; |
MAPFLD | As | Select field1 AS FLD1 from EMPPF; |
JFLD | Join | Select * from EMPPF Inner Join CUSTPF ON EMPPF.field1 = CUSTPF.field2; |
GRPFLD | Group By | Select * from EMPPF Group by Field1; |
Syntax:
OPNQRYF FILE((library-name/file-name member-name record-format-name))
        OPTION(open-option)
        FORMAT(library-name/database-file-name record-format-name)
        QRYSLT('query-selection')
        KEYFLD((field-name))
        IGNDECERR(*NO/*YES)
        COMMIT(*NO/*YES)
        OPNSCOPE(*ACTGRPDFN/*ACTGRP/*JOB)
        DUPKEYCHK(*NO/*YES)
        ALWCPYDTA(*OPTIMIZE/*YES/*NO)
        OPTIMIZE(*ALLIO/*FIRSTIO/*MINWAIT)
Example-
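A sketch of such a command, assuming an employee file EMPPF in library WELCOME24 with a numeric field EMPID (all names and selection values here are illustrative):
OPNQRYF FILE((WELCOME24/EMPPF)) +
        QRYSLT('EMPID *GT 1000') +
        FORMAT(*FILE) +
        KEYFLD((EMPID *ASCEND)) +
        IGNDECERR(*NO) +
        COMMIT(*NO) +
        OPNSCOPE(*ACTGRPDFN) +
        DUPKEYCHK(*NO) +
        ALWCPYDTA(*OPTIMIZE) +
        OPTIMIZE(*FIRSTIO) +
        OPTION(*INP)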
In the above example:
- ‘FILE (WELCOME24/EMPPF)’: specifies the file to be queried.
- ‘QRYSLT’: defines the query selection criteria.
- ‘FORMAT’: specifies the output format for the result.
- ‘KEYFLD’: key fields for sorting & selecting unique records.
- ‘IGNDECERR’: Ignores decimal data errors during query processing.
- ‘COMMIT’: defines commitment control behaviour.
- ‘OPNSCOPE’: specifies the scope of open query.
- ‘DUPKEYCHK’: Enables or disables duplicate key checking.
- ‘ALWCPYDTA’: Allows or disallows copying data to temporary file.
- ‘OPTIMIZE’: specifies the optimization level for query processing.
- ‘OPTION’: Sets additional processing options.
OVRDBF – The Override with Database File (OVRDBF) command is used to change the file named in the program, or certain parameters of a file that are used by the program. All overrides (changes) are temporary and remain in effect only while the override is in scope.
Parameters of OVRDBF:
- FILE – It specifies the name of the file to be overridden (changed).
- TOFILE – It specifies the name of the file to be used in place of the overridden file.
- MBR – It specifies the member used within the file.
- POSITION – It denotes the starting position for reading records in the overridden file.
- SHARE – It specifies whether the open data path of the overridden file can be shared with other programs.
Note – When used with OPNQRYF, OVRDBF with SHARE(*YES) makes the open data path created by OPNQRYF shared, so the program that subsequently reads the file processes the queried records. If the OVRDBF command is not used, the open data path will not be shared.
Using OVRDBF command.
Without Using OVRDBF Command.
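A minimal sketch of this shared-ODP pattern, assuming a hypothetical program EMPRPT in WELCOME24 that reads EMPPF (the file, field, and program names are illustrative):
OVRDBF FILE(EMPPF) SHARE(*YES)
OPNQRYF FILE((WELCOME24/EMPPF)) QRYSLT('DEPT *EQ "D01"') KEYFLD((EMPNAME))
CALL PGM(WELCOME24/EMPRPT) /* Reads the selected, ordered records */
CLOF OPNID(EMPPF) /* Close the open data path created by OPNQRYF */
DLTOVR FILE(EMPPF)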
Usage:
It can be used to open a file to a particular set of records as per the query request.
It can be used for Ordering & Grouping the Records.
It is also used for joining records from multiple files.
Restrictions & Compatibility:
Restrictions-
- Temporary Nature: The result set produced by an Open Query File is temporary, and its data is only available for the duration of the session. It is discarded when the session ends, so you need to save or export the data if you want to keep it for future reference.
- Limited Storage: The temporary results are held in temporary storage, and there is a limit to the amount of data that can be held there. If a query generates a large result set, storage limitations may be encountered.
- Read-Only: OPNQRYF is primarily used for reading data. It cannot be used to update, insert, or delete records in a database file.
- Performance: It may not be as efficient as SQL for certain types of queries.
- Complexity: Queries involving multiple file joins or complex conditions can become difficult to write and maintain.
Compatibility-
- Query Syntax: Compatibility depends on the specific query you are trying to execute and whether it uses features that are supported by the system.
- Library and Object Names: Compatibility can be affected by the naming conventions used for libraries and objects on your system.
Examples:
- Basic Query
- Joining Files
- Sorting Results
- Creating a Temporary Result Set
ILE with CL using Procedures
Introduction
In CLLE, subroutines and procedures are essential programming constructs used to organize and modularize code for better maintainability and reusability. Subroutines and procedures in CLLE provide a way to organize code, promote code reuse and improve the overall structure of the program. Let’s know about these concepts briefly below.
Subroutines
A subroutine is a self-contained section of code within a CLLE program that is defined to perform a specific task or set of related tasks. It has a unique name that allows the program to call and execute the subroutine as needed. This helps the developer of the CLLE program to reuse the code as many times as they need in their code without having to rewrite the same code for the functionality again and again.
Subroutines in CLLE
The Subroutine (SUBR) command is used in a CL program or procedure, along with the End Subroutine (ENDSUBR) command, to delimit the group of commands that define a subroutine. Let’s look at these commands one by one along with the two other important commands which are widely used when using subroutines in CLLE.
- SUBR: The SUBR command is used in the CLLE program to mark the beginning of the subroutine block. It has the following syntax:
SUBR SUBR(Subroutine_Name)
- ENDSUBR: The ENDSUBR command is used in the CLLE program to mark the termination of the subroutine block that began with the SUBR command.
- RTNSUBR: The optional RTNSUBR command is used to return a value and exit the subroutine that has been called.
RTNSUBR RTNVAL(INTVALUE)
- CALLSUBR: The CALLSUBR command can be used anywhere in the CLLE program to call the subroutine block and execute the code written inside it. The CALLSUBR command has the following syntax:
CALLSUBR SUBR(Subroutine_Name)
Subroutines are physically located in your source code following your main program logic and prior to the ENDPGM command. You can have many subroutines within your program, each delimited by paired SUBR and ENDSUBR commands.
The SUBR command has one parameter, SUBR, which is used to name the subroutine. This name is then used with the SUBR parameter of the CALLSUBR command to identify which subroutine is to be run. The ENDSUBR command defines the end of the common logic that started with the previous SUBR command. The ENDSUBR command has one parameter, RTNVAL, which can be used to specify a return value that can be returned to the caller of the subroutine.
The CALLSUBR command has two parameters: SUBR, which identifies the subroutine to call, and RTNVAL, which can optionally identify a CL variable to receive the return value of the called subroutine. There is also a Return from subroutine command, RTNSUBR, which can be used to immediately return control to the calling CALLSUBR command without having to run the ENDSUBR command. The RTNSUBR command also has one parameter, RTNVAL, which, like ENDSUBR, allows you to identify a return value to be returned to the calling CALLSUBR.
Example of CLLE program:
Let’s look at the below example of the CLLE program that uses a subroutine inside it.
CLLE Snippet showing subroutine.
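A minimal sketch of such a program (the values assigned to &NUM1 and &NUM2 are illustrative):
PGM
DCL VAR(&NUM1) TYPE(*DEC) LEN(5 0) VALUE(10)
DCL VAR(&NUM2) TYPE(*DEC) LEN(5 0) VALUE(20)
DCL VAR(&SUM)  TYPE(*DEC) LEN(5 0)
/* Call the subroutine that performs the addition */
CALLSUBR SUBR(ADDSUBR)
RETURN
/* Subroutines follow the main logic, before ENDPGM */
SUBR SUBR(ADDSUBR)
CHGVAR VAR(&SUM) VALUE(&NUM1 + &NUM2)
ENDSUBR
ENDPGM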
In the above CLLE source snippet we have used the subroutine to perform a basic arithmetic addition operation on two numbers namely &NUM1 and &NUM2 and we are storing the result of the two numbers into &SUM.
In the above example we have used the CALLSUBR command to call the subroutine ADDSUBR. This subroutine is defined between the SUBR and ENDSUBR commands as shown above. Inside the subroutine definition we have written the logic of the sum using the CHGVAR command.
We can use this subroutine as many times as we want and hence it increases the reusability of the code written inside the subroutine definition.
Procedures
In the Integrated Language Environment (ILE), procedures are modular units of code that encapsulate a set of operations or logic. ILE is an architectural framework used in IBM's AS/400 and IBM i systems, allowing integration of different programming languages such as RPG. A procedure definition specifies the details and structure of a procedure in a programming language; it is the fundamental step in creating a modular, reusable piece of code that can be called or invoked from other parts of a program.
Subprocedures are much like the functions used in modern programming languages. The differences between subroutines and procedures are as follows:
- You can pass parameters to a subprocedure, even passing by value.
- The parameters passed to a subprocedure and those received by it are checked at compile time for consistency. This helps to reduce run-time errors, which can be more costly.
- Names defined in a subprocedure are not visible outside the subprocedure.
- You can call subprocedures recursively.
- You can call the subprocedure from outside the module, if it is exported.
Procedures in CLLE
A CL procedure is a group of CL commands that tells the system where to get input, how to process it, and where to place the results. The procedure is assigned a name by which it can then be called by other procedures or bound into a program and called. As with other kinds of procedures, you must enter CL procedure source statements, compile, and bind them before you can run the procedure.
CL procedures can be written for many purposes, including:
- To control the sequence of processing and calling of other programs or procedures.
- To display a menu and run commands based on options selected from that menu. This makes the workstation user’s job easier and reduces errors.
- To read a database file.
- To handle error conditions issued from commands, programs, or procedures, by monitoring for specific messages.
- To control the operation of an application by establishing variables used in the application, such as date, time, and external indicators.
- To provide predefined functions for the system operator, such as starting a subsystem or saving files. This reduces the number of commands the operator uses regularly, and it ensures that system operations are performed consistently.
In CLLE, the CALL command is used to call the program whereas to call the procedure we use the CALLPRC command.
Example of CLLE program:
Let’s look at the below example of the CLLE program that uses a sub-procedure inside it.
CLLE Snippet showing sub-procedure:
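A minimal sketch of such a caller, assuming the merged value is returned through a third parameter (names and lengths are illustrative):
PGM
DCL VAR(&VALUE1) TYPE(*CHAR) LEN(10) VALUE('ROHAN')
DCL VAR(&VALUE2) TYPE(*CHAR) LEN(10) VALUE('SINGH')
DCL VAR(&RESULT) TYPE(*CHAR) LEN(21)
CALLPRC PRC(MERGPROC) PARM(&VALUE1 &VALUE2 &RESULT)
SNDPGMMSG MSG(&RESULT)
ENDPGM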
As we can see, the above CLLE program uses the CALLPRC command to call the external procedure named MERGPROC, passing VALUE1 and VALUE2 as parameters.
According to the above CLLE snippet the parameters passed in the CALLPRC command would have the following values:
VALUE1 = ROHAN
VALUE2 = SINGH
Let's check the source of the module containing MERGPROC:
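A minimal sketch of the module source, under the same assumption that the merged value is passed back through a third parameter:
PGM PARM(&VAL1 &VAL2 &OUT)
DCL VAR(&VAL1) TYPE(*CHAR) LEN(10)
DCL VAR(&VAL2) TYPE(*CHAR) LEN(10)
DCL VAR(&OUT) TYPE(*CHAR) LEN(21)
/* Concatenate the two values with a single blank between them */
CHGVAR VAR(&OUT) VALUE(&VAL1 *BCAT &VAL2)
ENDPGM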
As we can see above, the logic that concatenates the values passed from the CLLE program is written in the module source. The procedure returns the concatenated value to the calling CLLE program, which displays the output as shown:
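Consistent with the sketch above, the expected output would be:
ROHAN SINGH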
Data Structure
Introduction
Data structures are a powerful tool that can make your CL programs more efficient and easier to read and maintain. By using data structures, you can group related data together and reduce the amount of code that you need to write.
Data structures in CL on the AS400 can be created using defined variables. A defined variable is a variable that is based on a portion of another variable. This allows you to group related data together and to reference it more easily. To create a defined variable, you use the DCL VAR command with the STG(*DEFINED) and DEFVAR parameters.
The term STG(*DEFINED) means that this variable is another name for part of another variable. The DEFVAR parameter tells which variable this one is based on and the position at which this variable overlays the other.
Here are some of the benefits of using data structures in CL:
Improved efficiency: Data structures can help to improve the efficiency of your CL programs by reducing the amount of code that you need to write and by making it easier to access and manipulate data.
Increased readability and maintainability: Data structures can help to make your CL programs more readable and maintainable by grouping related data together and by giving your data meaningful names.
Reduced errors: Data structures can help to reduce errors in your CL programs by providing a way to validate data before it is used.
Restrictions:
- CLLE data structures are limited to 64 KB in size.
- Arrays and pointers are not permitted in CLLE data structures.
- Members that are other data structures are not permitted in CLLE data structures.
- Nested structures are not permitted in CLLE data structures.
- Variable-length members are not permitted in CLLE data structures.
Compatibility:
Field Data Types: Ensure that the data types of fields within a data structure are compatible with the data you intend to store in them. IBM i supports various data types, including character, numeric, date, and time types.
Compatibility with Embedded SQL:
If you use embedded SQL in your CLLE programs, the data structures you define should align with the structure of the database tables you are interacting with. Field names in your data structures should match the column names in SQL statements for proper binding.
Naming Conventions:
Follow consistent naming conventions for your data structures and fields. This helps maintain code readability and makes it easier to work with other developers’ code.
Performance Optimization:
Depending on your specific application and performance requirements, you may need to optimize your data structures for efficient access and processing.
Examples
The following example shows how to create a defined variable for an employee record:
DCL VAR(&EMPREC) TYPE(*CHAR) LEN(77)
DCL VAR(&EMPNUM) TYPE(*CHAR) LEN(6) STG(*DEFINED) DEFVAR(&EMPREC 1)
DCL VAR(&EMPNAME) TYPE(*CHAR) LEN(20) STG(*DEFINED) DEFVAR(&EMPREC 7)
This creates a variable called &EMPREC that is 77 characters long. The &EMPNUM and &EMPNAME variables are defined over the first 6 characters and the following 20 characters of &EMPREC, respectively.
Once you have defined your defined variables, you can use them like any other variable in your CL program. For example, the following code shows how to print the employee’s name and number to the console:
SNDPGMMSG MSG(&EMPNAME)
SNDPGMMSG MSG(&EMPNUM)
Defined variables can be used to create complex data structures, such as arrays and linked lists. They can also be used to pass data between CL programs and other programs, such as RPG programs.
Here is an example of how to use a defined variable to pass data to an RPG program:
DCL VAR(&EMPREC) TYPE(*CHAR) LEN(77)
DCL VAR(&EMPNUM) TYPE(*CHAR) LEN(6) STG(*DEFINED) DEFVAR(&EMPREC 1)
DCL VAR(&EMPNAME) TYPE(*CHAR) LEN(20) STG(*DEFINED) DEFVAR(&EMPREC 7)
/* Assign values to the defined variables (sample values) */
CHGVAR VAR(&EMPNUM) VALUE('000123')
CHGVAR VAR(&EMPNAME) VALUE('JOHN SMITH')
/* Pass the whole structure to the RPG program */
CALL PGM(RPGPGM) PARM(&EMPREC)
Code example:
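A sketch of such a declaration, with illustrative subfield positions and types (the actual layout in the original may differ):
DCL VAR(&DATASTRUCT) TYPE(*CHAR) LEN(50)
DCL VAR(&FLD1) TYPE(*CHAR) LEN(10) STG(*DEFINED) DEFVAR(&DATASTRUCT 1)
DCL VAR(&FLD2) TYPE(*DEC) LEN(9 2) STG(*DEFINED) DEFVAR(&DATASTRUCT 11)
DCL VAR(&FLD3) TYPE(*CHAR) LEN(20) STG(*DEFINED) DEFVAR(&DATASTRUCT 16)
DCL VAR(&FLD4) TYPE(*LGL) STG(*DEFINED) DEFVAR(&DATASTRUCT 36)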
In the above code:
DCL declares a 50-character variable named &DATASTRUCT acting as the data structure; the subfields &FLD1, &FLD2, &FLD3, and &FLD4, of different data types, are each defined over a portion of &DATASTRUCT.
STG(*DEFINED) – Storage: the value for this variable is held in the variable named in the DEFVAR parameter.
DEFVAR(&DATASTRUCT start-position) – Defined-on variable: specifies the variable that contains this subfield, and the position at which the subfield begins.
Handling Data Areas
- Understanding Data Areas: Data areas are named permanent storage locations in the IBM i system that can hold several types of data, including character, numeric, or logical values. They are usually defined in a library and are identified by a unique name. Their purpose is to pass information between multiple jobs. The main uses of data areas are:
- To store job information that is needed to run a group of jobs simultaneously.
- They are used in auto-generation of numbers e.g., next account no. generation, next invoice no. generation, next order no. generation etc.
- Creating a Data Area: In CL programming, you can create a data area using the `CRTDTAARA` command, like:
CRTDTAARA DTAARA(mylib/mydata) TYPE(*CHAR) LEN(10) VALUE('InitialValue')
- Reading Data from a Data Area: With the RTVDTAARA (Retrieve Data Area) command in CL, you can retrieve data from a data area. Example:
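A minimal sketch, reading the MYLIB/MYDATA data area created above into a CL variable:
DCL VAR(&DTAVAL) TYPE(*CHAR) LEN(10)
RTVDTAARA DTAARA(MYLIB/MYDATA) RTNVAR(&DTAVAL)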
The Display Data Area (DSPDTAARA) command displays the attributes and value of the specified data area. The following attributes are displayed: the type and length of the data area, the library where the data area is located (there is no library associated with a local data area, the group data area, or the program initialization parameter data area), and the text describing the data area.
Example:
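For instance, to display the data area created above:
DSPDTAARA DTAARA(MYLIB/MYDATA)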
- Writing Data to a Data Area: To write data to a data area, you can use the `CHGDTAARA` (Change Data Area) command. To change the full information, like:
CHGDTAARA DTAARA(mylib/mydata) VALUE('NewValue')
To change partial information, like:
CHGDTAARA DTAARA(mylib/mydata (substring-start substring-length)) VALUE('NewValue')
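For instance, to replace only the first three characters of the data area created above:
CHGDTAARA DTAARA(MYLIB/MYDATA (1 3)) VALUE('ABC')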
- Handling Data Area Errors: Handling data area failures with the MONMSG command is a typical approach in CL programming. MONMSG monitors for error conditions and runs error-handling logic, allowing exceptions related to data areas to be handled gracefully. Example:
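A minimal sketch, assuming CPF1015 (data area not found) is the condition to monitor:
DCL VAR(&DTAVAL) TYPE(*CHAR) LEN(10)
RTVDTAARA DTAARA(MYLIB/MYDATA) RTNVAR(&DTAVAL)
MONMSG MSGID(CPF1015) EXEC(DO) /* Data area not found */
SNDPGMMSG MSG('Data area MYLIB/MYDATA was not found') MSGTYPE(*DIAG)
ENDDO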
- Synchronizing Data Access: In situations where multiple programs access the same data area simultaneously, you should use synchronization mechanisms, such as locking, to maintain data integrity.
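For example, an exclusive lock can be requested with ALCOBJ before the change and released with DLCOBJ afterwards; a sketch using the data area from above:
ALCOBJ OBJ((MYLIB/MYDATA *DTAARA *EXCL)) WAIT(30)
CHGDTAARA DTAARA(MYLIB/MYDATA) VALUE('NewValue')
DLCOBJ OBJ((MYLIB/MYDATA *DTAARA *EXCL))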
- Cleaning Up Data Areas: When you no longer need a data area, you can use the `DLTDTAARA` (Delete Data Area) command to remove it from the system.
Like,
DLTDTAARA DTAARA(mylib/mydata)
Limitations of Data Areas
Handling data areas from CL (Control Language) programming has certain limitations and considerations that you should be aware of to use them effectively. Here are some important restrictions:
- Data Area Size Limitations: The maximum length specified during creation limits the size of data areas. The maximum length depends on the data area type (e.g., *CHAR, *DEC, *LGL ). For instance, a character data area (*CHAR) typically has a maximum length of 2000 characters.
- Limited Data Types: A CL data area can contain only certain types of data, such as character (*CHAR), decimal (*DEC), and logical (*LGL). Other data types commonly used in RPGLE, such as integer and float, are not supported.
- No Arrays or Record Formats: There is no support for array or record format structures in data areas. Depending on the type and length of data, they can only hold a single value.
- Limited Error Handling: Error handling in CL is limited compared to RPGLE. MONMSG can be used to monitor for errors during data area operations, but its error handling is less granular than RPGLE’s.
Handling Data Queues
Data queues (*DTAQ) are objects used to transfer data within a job or between multiple jobs. Once a data queue is created, it can be used to send and receive data multiple times, asynchronously.
Data Queues are mainly of 3 types:
- 1. Standard Data Queue(*STD): These data queues can mainly transfer data between different jobs and programs in a single IBM i system.
- 2. Distributed data management Data Queue(*DDM): These data queues are created for the scenarios when communication between two different IBM i systems is required.
- 3. Display Data Queue(*DSP): If a program wants to access data from a data queue while using display files then this data queue can be created. To make use of these data queues, we can specify the name of the data queue in the DTAQ parameter while creating a display file.
The sequence of storing and retrieving entries on a data queue is as follows:
- *FIFO (First-in, first-out): The entry sent first to the data queue will be retrieved first.
- *LIFO (Last-in, first-out): The entry sent last to the data queue will be retrieved first.
- *KEYED: The sequence of retrieving the entries will depend on the key value.
Usage
There are multiple commands and APIs available in IBM i to create, delete and work with the data queues. Using Data Queue enhances the overall performance of an interactive program by reducing the response time.
Useful Commands
Commands available in IBM i to handle Data Queues:
- CRTDTAQ (Create Data Queue): This command creates a Data Queue object in a particular library.
Command Syntax for *STD data queue:
CRTDTAQ DTAQ(*CURLIB/PIOSTDDTAQ)
        TYPE(*STD)
        MAXLEN(1000)
        SEQ(*FIFO)
        TEXT('STD DATA QUEUE')
Command Syntax for *DDM data queue:
CRTDTAQ DTAQ(*CURLIB/PIODDMDTAQ)
        TYPE(*DDM)
        RMTDTAQ(LIBNAME/RMTDTAQ1)
        RMTLOCNAME(*RDB)
        RDB(RDBNAME)
        TEXT('DDM DATA QUEUE')
Command Syntax for *DSP data queue:
CRTDTAQ DTAQ(*CURLIB/PIODSPDTAQ)
        TYPE(*DSP)
        MAXLEN(100)
        TEXT('DSP DATA QUEUE')
Important parameters of this command are as follows:
- 1. DTAQ:
Data queue: Describes the name of the data queue object to be created.
Library: Describes the library name in which the data queue object will be created. Its default value is *CURLIB (The current library of the job).
CALL PGM(QSNDDTAQ): This command will call an API ‘QSNDDTAQ’, through which we can send data to a data queue.
Command Syntax:
CALL PGM(QSNDDTAQ) PARM(<dataQueue> <Library> <LengthOfData> <Data>)
Example (the length parameter must be a packed decimal (5 0) value, so it is passed in a CL variable):
DCL VAR(&LEN) TYPE(*DEC) LEN(5 0) VALUE(15)
CALL PGM(QSNDDTAQ) PARM('PIODTAQ' 'PIOLIB' &LEN 'TEST DATA QUEUE')
Important parameters of this command are as follows:
- 1. Data Queue: Describes the name of the data queue object to which data will be sent. This parameter data type is Char, and the length is 10.
- 2. Library: Describes the library name in which the data queue object is present. This parameter data type is Char, and the length is 10.
- 3. Length: Describes the length of the data to be sent. This parameter data type is Packed and length 5,0.
- 4. Data: Describes the data to be sent to the data queue. This parameter data type is Char.
CALL PGM(QRCVDTAQ): This command will call an API ‘QRCVDTAQ’, through which we can receive data from a data queue.
Command Syntax:
CALL PGM(QRCVDTAQ) PARM(<dataQueue> <Library> <Length> <Data> <waitTime>)
Example (the numeric parameters must be packed decimal (5 0) CL variables, and the data is received into a CL variable):
DCL VAR(&LEN) TYPE(*DEC) LEN(5 0) VALUE(0)
DCL VAR(&DATA) TYPE(*CHAR) LEN(15)
DCL VAR(&WAIT) TYPE(*DEC) LEN(5 0) VALUE(0)
CALL PGM(QRCVDTAQ) PARM('PIODTAQ' 'PIOLIB' &LEN &DATA &WAIT)
Important parameters of this command are as follows:
- 1. Data Queue: Describes the name of the data queue object from which data will be received. This parameter data type is Char, and the length is 10.
- 2. Library: Describes the library name in which the data queue object is present. This parameter data type is Char, and the length is 10.
- 3. Length: Describes the length of the data to be received. This parameter data type is Packed, and the length is 5,0.
- 4. Data: This parameter will receive the data. Its data type is Char.
- 5. WaitTime: Describes the delay to be made while receiving data from a data queue. This parameter data type is Packed, and the length is 5,0.
Note: If WaitTime is -1, the data will be received from data queue as soon as it enters the data queue.
- DLTDTAQ (Delete Data Queue): This command deletes the Data Queue object from a particular library.
Command Syntax:
DLTDTAQ DTAQ(PIOLIB/PIODTAQ)
Restrictions
- The maximum value for the MAXLEN parameter is 64512.
- Data queues can send and receive data of character type only.
- The MAXLEN and SEQ parameters are not valid while creating a data queue of type *DDM.
- A *DSP data queue cannot be created with the sequence *KEYED.
Code Example:
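A minimal sketch of such a sender program (queue name, library, and values are illustrative):
PGM
DCL VAR(&VAR1) TYPE(*CHAR) LEN(20) VALUE('FIRST ENTRY')
DCL VAR(&VAR2) TYPE(*CHAR) LEN(20) VALUE('SECOND ENTRY')
DCL VAR(&LEN)  TYPE(*DEC)  LEN(5 0) VALUE(20)
/* Create the data queue */
CRTDTAQ DTAQ(PIOLIB/PIODTAQ) TYPE(*STD) MAXLEN(20) SEQ(*FIFO)
/* Send two entries */
CALL PGM(QSNDDTAQ) PARM('PIODTAQ' 'PIOLIB' &LEN &VAR1)
CALL PGM(QSNDDTAQ) PARM('PIODTAQ' 'PIOLIB' &LEN &VAR2)
ENDPGM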
The above code example is a CLLE program that creates a data queue of type *STD & sequence as *FIFO.
Then ‘QSNDDTAQ’ API is called to send the data to the PIODTAQ data queue.
We can send data to a single data queue multiple times. Data from the VAR1 variable is sent first, and then data from the VAR2 variable is sent.
The second code example is a CLLE program that will receive the data from the data queue by calling the ‘QRCVDTAQ’ API.
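A minimal sketch of the receiving program, under the same assumptions:
PGM
DCL VAR(&DATA) TYPE(*CHAR) LEN(20)
DCL VAR(&LEN)  TYPE(*DEC)  LEN(5 0) VALUE(0) /* Receives the entry length */
DCL VAR(&WAIT) TYPE(*DEC)  LEN(5 0) VALUE(0)
/* First receive: returns the entry sent from VAR1 (*FIFO) */
CALL PGM(QRCVDTAQ) PARM('PIODTAQ' 'PIOLIB' &LEN &DATA &WAIT)
SNDUSRMSG MSG(&DATA) MSGTYPE(*INFO)
/* Second receive: returns the entry sent from VAR2 */
CALL PGM(QRCVDTAQ) PARM('PIODTAQ' 'PIOLIB' &LEN &DATA &WAIT)
SNDUSRMSG MSG(&DATA) MSGTYPE(*INFO)
ENDPGM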
When the ‘QRCVDTAQ’ API is called for the first time in the above program, data sent from VAR1 will be retrieved first & the same will be displayed while executing the SNDUSRMSG command for the first time.
When the ‘QRCVDTAQ’ API is called for the second time in the above program, data sent after VAR1, i.e. VAR2 will be retrieved & the same will be displayed while executing the SNDUSRMSG command for the second time.
This is because PIODTAQ will retrieve data in *FIFO sequence.
Built-in Function
%CHAR
The %CHAR built-in function in CL is used to convert logical, decimal, integer, or unsigned integer data into character format.
SYNTAX: %CHAR(convert-argument)
- Convert-argument: The convert-argument is a CL variable with the type of *LGL, *DEC, *INT, or *UINT.
EXAMPLE:
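A sketch with illustrative declarations and values:
DCL VAR(&NUM1)   TYPE(*DEC)  LEN(5 0) VALUE(123)
DCL VAR(&LEN)    TYPE(*UINT) LEN(4) VALUE(10)
DCL VAR(&SIZE)   TYPE(*INT)  LEN(4) VALUE(-7)
DCL VAR(&LGLVAR) TYPE(*LGL) VALUE('1')
DCL VAR(&RESULT) TYPE(*CHAR) LEN(20)
CHGVAR VAR(&RESULT) VALUE(%CHAR(&NUM1))   /* '123' */
CHGVAR VAR(&RESULT) VALUE(%CHAR(&LEN))    /* '10' */
CHGVAR VAR(&RESULT) VALUE(%CHAR(&SIZE))   /* '-7' */
CHGVAR VAR(&RESULT) VALUE(%CHAR(&LGLVAR)) /* '1' */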
- The %CHAR built-in function converts the &NUM1, &LEN, &SIZE, and &LGLVAR variables, of types *DEC, *UINT, *INT, and *LGL respectively, to character type.
- The &RESULT variable, of type *CHAR, stores the result in character format.
%DEC
%DEC built-in function is used to convert character, logical, decimal, integer, or unsigned integer data into packed decimal format.
SYNTAX: %DEC(convert-argument {total-digits decimal-places})
- Convert-argument: The convert-argument is a CL variable with the type of *CHAR, *LGL, *DEC, *INT, or *UINT.
- Total-digits & decimal-places: The total-digits and decimal-places parameters are optional; they take default values based on the data type.
EXAMPLE:
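A sketch with illustrative declarations and values:
DCL VAR(&STR1)   TYPE(*CHAR) LEN(10) VALUE('123.45')
DCL VAR(&LGLVAR) TYPE(*LGL) VALUE('1')
DCL VAR(&NUM1)   TYPE(*INT)  LEN(4) VALUE(25)
DCL VAR(&NUM2)   TYPE(*UINT) LEN(4) VALUE(7)
DCL VAR(&RESULT) TYPE(*DEC)  LEN(9 2)
CHGVAR VAR(&RESULT) VALUE(%DEC(&STR1 9 2)) /* 123.45 */
CHGVAR VAR(&RESULT) VALUE(%DEC(&NUM1 9 2)) /* 25.00 */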
- The %DEC built-in function converts the &STR1, &LGLVAR, &NUM1, and &NUM2 variables, of types *CHAR, *LGL, *INT, and *UINT respectively, to decimal type.
- The &RESULT variable, of type *DEC, stores the result in packed decimal format.
%INT
%INT built-in function is used to convert character, logical, decimal, or unsigned integer data to integer format.
SYNTAX: %INT(convert-argument)
- Convert-argument: The convert-argument is a CL variable with the type of *CHAR, *LGL, *DEC or *UINT.
EXAMPLE:
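A sketch with illustrative declarations and values:
DCL VAR(&STR1)   TYPE(*CHAR) LEN(5) VALUE('123')
DCL VAR(&LGLVAR) TYPE(*LGL) VALUE('1')
DCL VAR(&NUM1)   TYPE(*DEC)  LEN(5 0) VALUE(42)
DCL VAR(&NUM2)   TYPE(*UINT) LEN(4) VALUE(7)
DCL VAR(&RESULT) TYPE(*INT)  LEN(4)
CHGVAR VAR(&RESULT) VALUE(%INT(&STR1)) /* 123 */
CHGVAR VAR(&RESULT) VALUE(%INT(&NUM1)) /* 42 */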
- The %INT built-in function converts the &STR1, &LGLVAR, &NUM1, and &NUM2 variables, of types *CHAR, *LGL, *DEC, and *UINT respectively, to integer type.
- The &RESULT variable, of type *INT, stores the result in integer format.
%LEN
%LEN built-in function is used to return the length of the numeric or character variable.
SYNTAX: %LEN(variable-argument)
- Variable-argument: The variable-argument is a CL variable with the type of *CHAR, *DEC, *INT, or *UINT.
- If length is not defined for the numeric and character variable it will return the default length.
EXAMPLE:
- The length of variables &STR1, &STR2, &NUM1, &NUM2, &NUM3, and &NUM4 will be 32, 30, 16, 6, 10, and 5 respectively stored in &LEN variable.
- The value 5 will be returned for a 2-byte *INT or *UINT variable. And value 10 will be returned for a 4-byte *INT or *UINT.
%LOWER
%LOWER built-in function returns a character string of the same length as the argument passed in, but with every uppercase letter changed to its corresponding lowercase letter.
SYNTAX: %LOWER(input-string {CCSID})
- input-string: input-string is a CL variable with the type of *CHAR.
- The CCSID parameter is optional and defaults to the job CCSID.
EXAMPLE:
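A sketch with illustrative declarations:
DCL VAR(&STR1)   TYPE(*CHAR) LEN(10) VALUE('HI THERE')
DCL VAR(&RESULT) TYPE(*CHAR) LEN(10)
CHGVAR VAR(&RESULT) VALUE(%LOWER(&STR1)) /* 'hi there' */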
&STR1 variable contains the value “HI THERE” in uppercase letters. %LOWER built-in function will return the value “hi there” in lowercase letters.
%UPPER
%UPPER built-in function returns a character string of the same length as the argument passed in, but with every lowercase letter changed to its corresponding uppercase letter.
SYNTAX: %UPPER(input-string {CCSID})
- input-string: input-string is a CL variable with the type of *CHAR.
- The CCSID parameter is optional and defaults to the job CCSID.
EXAMPLE:
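A sketch with illustrative declarations:
DCL VAR(&STR1)   TYPE(*CHAR) LEN(10) VALUE('hi there')
DCL VAR(&RESULT) TYPE(*CHAR) LEN(10)
CHGVAR VAR(&RESULT) VALUE(%UPPER(&STR1)) /* 'HI THERE' */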
&STR1 variable contains the value “hi there” in lowercase letters. %UPPER built-in function will return the value “HI THERE” in uppercase letters.
%PARMS
%PARMS built-in function is used to return the number of parameters that were passed to the program in which %PARMS is used.
SYNTAX: %PARMS()
EXAMPLE:
The following is the source for EMPDATA.
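A minimal sketch of such a program, assuming the three parameters are character fields (all names and lengths are illustrative):
PGM PARM(&P1 &P2 &P3)
DCL VAR(&P1) TYPE(*CHAR) LEN(10)
DCL VAR(&P2) TYPE(*CHAR) LEN(10)
DCL VAR(&P3) TYPE(*CHAR) LEN(10)
DCL VAR(&NBRPARMS) TYPE(*INT) LEN(4)
DCL VAR(&MSG) TYPE(*CHAR) LEN(30)
CHGVAR VAR(&NBRPARMS) VALUE(%PARMS())
CHGVAR VAR(&MSG) VALUE(%CHAR(&NBRPARMS) *BCAT 'parms were passed')
SNDPGMMSG MSG(&MSG)
ENDPGM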
- Three parameters were passed to the program on the program call.
- The %PARMS built-in function returns 3, so “3 parms were passed” is the result of the program.
%CHECK:
%CHECK is used to find, scanning from left to right, the first position in the base string containing a character that does not appear in the comparator string. A 0 is returned when all characters match. It is supported in arithmetic expressions and conditional statements.
SYNTAX: %CHECK(comparator-string base-string {starting-position})
- Comparator-string: It must be either a CL character variable or a character literal. The comparator string specifies the characters to search for in the base string.
- Base-string: It can be a CL character variable or *LDA. The base string is the string against which the comparison is made.
- Starting-position: It is optional, defaults to 1, and specifies where checking begins.
EXAMPLE:
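A sketch, assuming &AMOUNT contains the illustrative value '$***$**5.27':
DCL VAR(&AMOUNT) TYPE(*CHAR) LEN(11) VALUE('$***$**5.27')
DCL VAR(&POS) TYPE(*UINT) LEN(4)
CHGVAR VAR(&POS) VALUE(%CHECK('$*' &AMOUNT)) /* &POS = 8 */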
- Look for the first character that isn’t an asterisk(*) or a dollar sign($). From left to right, the characters in the variable &AMOUNT are examined.
- The CHGVAR command assigns the value 8 to the cl variable &POS since the eighth character is the first one that is neither an asterisk nor a dollar sign.
%CHECKR:
%CHECKR is used to find, scanning from right to left, the first position in the base string containing a character that does not appear in the comparator string. A 0 is returned when all characters match. It is supported in arithmetic expressions and conditional statements.
SYNTAX: %CHECKR(comparator-string base-string {starting-position})
EXAMPLE:
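A sketch matching the description below (lengths and values are illustrative):
DCL VAR(&COM)   TYPE(*CHAR) LEN(3)  VALUE('$* ')
DCL VAR(&AMT)   TYPE(*CHAR) LEN(10) VALUE('$***5.27')
DCL VAR(&SPOST) TYPE(*UINT) LEN(4)
DCL VAR(&EPOST) TYPE(*UINT) LEN(4)
DCL VAR(&LENT)  TYPE(*UINT) LEN(4)
DCL VAR(&SUBSTR) TYPE(*CHAR) LEN(10)
DCL VAR(&DECS)  TYPE(*DEC)  LEN(7 2)
CHGVAR VAR(&SPOST) VALUE(%CHECK(&COM &AMT))  /* Leftmost non-'$* ' position */
CHGVAR VAR(&EPOST) VALUE(%CHECKR(&COM &AMT)) /* Rightmost non-'$* ' position */
CHGVAR VAR(&LENT)  VALUE(&EPOST - &SPOST + 1)
CHGVAR VAR(&SUBSTR) VALUE(%SST(&AMT &SPOST &LENT)) /* '5.27' */
CHGVAR VAR(&DECS)  VALUE(&SUBSTR) /* CHGVAR converts character to decimal */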
- &COM contains the comparator string (‘$* ‘).
- &AMT contains the base string (‘$***5.27 ‘).
- &SPOST stores the leftmost position where a character not in the comparator string appears.
- &EPOST stores the rightmost position where a character not in the comparator string appears.
- &LENT calculates the number of characters between these two positions.
- &DECS extracts the relevant substring and converts it to a decimal CL variable.
%SCAN:
The %SCAN built-in function in CL is a powerful tool for string manipulation.
%SCAN returns the first position of a search argument within the source string.
The function returns 0 if the search argument cannot be found.
%SCAN can be used anywhere an arithmetic expression is supported by CL.
SYNTAX: %SCAN(search-argument source-string {starting-position})
- Search argument: a character literal or a CL character variable. It is the substring that you want to search for within a larger string.
- Source-string: *LDA or a CL character variable. When *LDA is specified, the contents of the job's local data area are scanned. It is the string that is searched for occurrences of the specified substring.
- Starting position (optional): the position in the source string where the search starts; by default the search starts at position 1.
EXAMPLE:
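A sketch of that check (the variable name and value are illustrative):
DCL VAR(&FIRSTNAME) TYPE(*CHAR) LEN(15) VALUE('jonny')
IF COND(%SCAN('jonny' &FIRSTNAME) *EQ 0) +
   THEN(SNDPGMMSG MSG('jonny was not found'))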
- A message is issued if the string “jonny” cannot be located in the variable &FIRSTNAME.
- Because the scan is case sensitive, if &FIRSTNAME contains the value “jonny”, a search for “Jonny” will not yield a positive result.
%SUBSTRING OR %SST:
You can work with character strings using CLLE's built-in %SUBSTRING (or %SST) function.
The %SUBSTRING function produces a character string that is a subset of an existing character string.
SYNTAX: %SUBSTRING(character-variable-name starting-position length)
Alternatively, it can be written as: %SST(character-variable-name starting-position length)
We can also use %SST with the special value *LDA to indicate that the substring function operates on the contents of the local data area.
- Character-variable-name: The name of the CL character variable or the special value *LDA. Character variable name refers to the name of the variable containing the string from which you want to extract a substring.
- Starting position: The position (which can be a variable name) where the substring begins (cannot be 0 or negative). Specifies the position within the source string from which the substring extraction begins.
- Length: The length (which can also be a variable name) of the substring (cannot be 0 or negative).
The program CUS210 is invoked if the initial two positions of &VAR1 and &VAR2 match.
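A sketch of that comparison, assuming two 10-character variables:
DCL VAR(&VAR1) TYPE(*CHAR) LEN(10) VALUE('ABCDE')
DCL VAR(&VAR2) TYPE(*CHAR) LEN(10) VALUE('ABXYZ')
IF COND(%SST(&VAR1 1 2) *EQ %SST(&VAR2 1 2)) THEN(CALL PGM(CUS210))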
%TRIM:
Leading and trailing characters can be removed from character strings using %TRIM. The %TRIM function serves two purposes:
Trim leading and trailing blanks: when used with a single parameter, it removes leading and trailing blank spaces from a character string.
Custom trim: when used with two parameters, it removes the leading and trailing characters specified in the second parameter.
SYNTAX: %TRIM(character-variable-name {characters-to-trim})
- Character-variable-name: The name of the variable to trim.
- Characters-to-trim: The characters to trim.
- If the characters-to-trim parameter is specified, it must be either a CL character variable or a character literal.
- If, after trimming, no characters are left, the function produces a string of blank characters.
EXAMPLE:
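A sketch matching the description below (names and values are illustrative):
DCL VAR(&FNAME) TYPE(*CHAR) LEN(15) VALUE('  JOHN ')
DCL VAR(&LNAME) TYPE(*CHAR) LEN(15) VALUE(' SMITH ')
DCL VAR(&SNAME) TYPE(*CHAR) LEN(31)
CHGVAR VAR(&SNAME) VALUE(%TRIM(&FNAME) *BCAT %TRIM(&LNAME))
/* &SNAME now contains 'JOHN SMITH' */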
- Eliminate leading and trailing whitespace characters.
- After trimming the leading and trailing blanks from the CL variables &FNAME and &LNAME, the remaining strings are concatenated with the *BCAT operator, leaving a single blank between the two values.
- Next, the concatenated string is assigned to the CL variable &SNAME.
%TRIML:
The built-in function %TRIML is used to manipulate strings by removing only leading characters.
SYNTAX: %TRIML(character-variable-name {characters-to-trim})
- Character-variable-name: The name of the variable to trim.
- Characters-to-trim: The characters to trim.
- If the characters-to-trim parameter is specified, it must be either a CL character variable or a character literal.
- If, after trimming, no characters are left, the function produces a string of blank characters.
EXAMPLE:
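A sketch matching the description below (names and values are illustrative):
DCL VAR(&AMT) TYPE(*CHAR) LEN(10) VALUE('$$ 6.37')
DCL VAR(&TRIMMEDAMT) TYPE(*CHAR) LEN(10)
DCL VAR(&DECVAR) TYPE(*DEC) LEN(7 2)
CHGVAR VAR(&TRIMMEDAMT) VALUE(%TRIML(&AMT '$ ')) /* '6.37' */
CHGVAR VAR(&DECVAR) VALUE(&TRIMMEDAMT) /* 6.37 as a decimal value */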
- All dollar signs and blanks are removed from the beginning of the CL variable &AMT, and the characters that remain (6.37) are assigned to the CL variable &TRIMMEDAMT.
- The numeric value held in the character variable &TRIMMEDAMT is then assigned to the decimal variable &DECVAR.
%TRIMR:
The %TRIMR function removes trailing characters (usually blank spaces) from a character string.
SYNTAX: %TRIMR(character-variable-name {characters-to-trim})
- Character-variable-name: The name of the variable to trim.
- Characters-to-trim: The characters to trim.
- If the characters-to-trim parameter is specified, it must be either a CL character variable or a character literal.
- If, after trimming, no characters are left, the function produces a string of blank characters.
EXAMPLE:
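A sketch matching the description below (names and values are illustrative):
DCL VAR(&FNAME) TYPE(*CHAR) LEN(15) VALUE('JOHN')
DCL VAR(&LNAME) TYPE(*CHAR) LEN(15) VALUE('SMITH')
DCL VAR(&SNAME) TYPE(*CHAR) LEN(31)
CHGVAR VAR(&SNAME) VALUE(%TRIMR(&FNAME) *CAT %TRIMR(&LNAME))
/* &SNAME now contains 'JOHNSMITH' */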
- The trailing blank characters are removed from the CL variables &FNAME and &LNAME, and the resulting strings are concatenated using the *CAT operator.
- Next, the concatenated string is assigned to the CL variable &SNAME.
Report Program Generator (RPG)
Logic Cycle
Introduction
The RPG cycle simplifies the development process by automatically handling file operations, calculations, and report generation. Programmers can focus on business rules and logic without explicitly writing code for file handling and other repetitive tasks. The primary procedure goes through the RPG cycle, which is a set of sequential steps, for every record that is read.
A portion of an RPG program's logic is provided by the RPG compiler: for a cycle-main procedure, the compiler supplies the program cycle, often known as the logic cycle or RPG cycle.
The following steps are part of the program cycle:
- Implicit opening of files and locking of data areas
- Reading and processing of records
- Any calculations or report printing
- Implicit closing of files and unlocking of data areas
RPG Life Cycle Stages
There are multiple stages in the AS/400 RPG (Report Program Generator) logic cycle:
- Compile: RPG programs are written in a high-level language and transformed into machine-readable code by a compiler. During compilation the program is checked for syntax errors and converted into machine code.
- Bind: After compilation, the program must be bound to the relevant database files and other resources it interacts with. Binding guarantees the program access to the required files and data structures.
- Activation: The RPG program goes through an activation process when a user starts it. This includes communicating with the database and allocating resources such as memory.
- Execution: Based on the input and the program logic, the stated operations are carried out. Calculations, report generation, and record processing are common tasks of RPG programs.
- Termination: After execution finishes, the program goes through a termination phase. Any temporary storage is released, and resources are deallocated.
- Output: Reports and files, if any, are created either during or after the execution phase of the RPG program. When the RPG program is run again, the cycle repeats as necessary. This cycle keeps RPG programs on the AS/400 system running in an orderly manner.
Usage in the RPG life cycle
In the RPG life cycle context, the term “usage” refers to the many activities and purposes for which RPG (Report Program Generator) programs are used throughout their lives on AS/400 or IBM i systems. RPG programs are typically employed at different points of their lifecycle, as shown below:
Development: RPG programs are generated during development to implement specific business logic or functionality. SEU (Source Entry Utility) and RDi (Rational Developer for i) are tools used by developers to write RPG code.
Testing: After development, RPG programs are tested to ensure they work as intended and meet the specifications. This entails several forms of testing, including unit testing, integration testing, and system testing.
Integration: RPG programs are integrated with other system components or third-party applications as required. This can include interacting with databases, calling other programs or services, and exchanging data with external systems.
Deployment: Once testing is completed and the programs are considered ready for production use, they are moved to production settings. This includes transferring compiled RPG objects (such as programs and modules) to the production system and making any necessary configuration adjustments.
Maintenance: RPG programs require maintenance throughout their lifecycle to address errors, adapt to changing business requirements, and increase performance or usefulness. Bug repairs, additions, and optimizations are all possible maintenance operations.
Monitoring and Support: Once deployed, RPG programs are monitored to ensure they run successfully in production situations. Support teams are responsible for resolving any difficulties that emerge and assisting users as needed.
Overall, RPG programs play an important role in the IBM i environment, acting as the foundation for many business applications that operate on these systems. Their use throughout the lifecycle enables the efficiency and ongoing improvement of the systems they support.
Restrictions in the RPG life cycle
Restrictions in the RPG life cycle might include:
Limited tooling: When compared to more current languages, RPG development tools may be limited, affecting development and maintenance efficiency.
Legacy codebase: Working with legacy RPG codebases can be difficult due to outdated coding methods and a lack of documentation.
Platform dependencies: RPG programs are often strongly tied to the IBM i platform, limiting portability and interoperability with other systems.
Skill availability: Because RPG programming is becoming less popular in comparison to more recent languages, finding developers with competence in this area may be difficult.
Performance constraints: Older RPG applications may not fully utilize the capabilities of newer hardware, resulting in potential performance bottlenecks.
Addressing these limits frequently requires a combination of modernization activities, such as reworking legacy code, adopting newer development tools, and educating or hiring engineers with competence in both RPG and current technologies.
Built-in Functions
%ABS:
The absolute value of the numeric expression specified as the parameter is returned by %ABS. The value is returned unchanged if the numeric expression’s value is non-negative.
To find the Absolute value (Positive value) of a numerical expression, use the %ABS function. When we want the expression results to be positive, we can use the %ABS () function.
Syntax: %ABS (numeric expression)
Example:
Result
%DIFF:
To find the difference between two date, time, or timestamp values, use the %DIFF function. The difference (duration) between two date or time data is generated by %DIFF.
The types of the first and second parameters must be the same or compatible.
The combinations mentioned below can be used to obtain the difference:
- Differences between the two dates
- Difference between the two times
- Difference between two timestamps
- Difference between Date and timestamp (only the time portion of the timestamp)
- Difference between Time and timestamp (only the time portion of the timestamp)
The third parameter indicates the unit used to evaluate the difference. The following units are valid:
- *DAYS, *MONTHS, *YEARS for two dates, or a date and a timestamp.
- *SECONDS, *MINUTES, *HOURS for two times, or a time and a timestamp.
- *MSECONDS, *SECONDS, *MINUTES, *HOURS, *DAYS, *MONTHS, *YEARS for two timestamps.
Syntax:%DIFF(op1 : op2 : unit {: frac })
Example:
Result
%DIV:
The integer part of the quotient obtained by dividing operands n by m is returned by the function %DIV.
It is required that the two operands have decimal values with zero decimal places.
The result is packed numeric if the operand is either a zoned, packed, or binary numeric value. The result is an integer if either operand has an integer numeric value. If not, the outcome is an unsigned number.
Float numeric operands are not allowed.
If the operands are constants that can fit in 8-byte integers or unsigned fields, constant folding is applied to the built-in function.
In this scenario, the definition specifications can be used to code the built-in %DIV function.
Syntax:%DIV(n:m)
Example:
Result
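A minimal free-format sketch (variable names are illustrative):

dcl-s totalMinutes int(10) inz(130);
dcl-s hours int(10);

hours = %div(totalMinutes : 60);   // 2: the integer part of 130/60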
%EDITC:
The %EDITC function formats numeric values with special characters such as the asterisk (*), period (.), comma (,), cent sign (¢), pound sign (£), dollar sign ($), minus sign (-), and credit sign (CR). It can also be used to suppress leading zeros or to format a number with slashes (/), for example to produce a date format.
Real-world scenarios frequently need us to provide reports with amount fields that look like $12,345.67-, $12,345.67CR, or ‘***12345.67-‘ rather than showing the amount as -12,345.67.
We can use the %EDITC Function in such report production programs to generate the results we desire.
Syntax:%EDITC(numeric : editcode {: *ASTFILL | *CURSYM | currency-symbol})
Here, the first parameter is the input numeric value that we wish to format.
The second parameter is the edit code used to generate the required edited string.
The third parameter is a further formatting choice (*ASTFILL, *CURSYM, or a currency symbol) used to shape the edited string. It is optional.
Example:
Result
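A minimal free-format sketch (names are illustrative; the exact edited strings depend on the field definition and edit code):

dcl-s amount packed(9:2) inz(-12345.67);
dcl-s formatted char(20);

formatted = %editc(amount : 'J');             // e.g. '12,345.67-'
formatted = %editc(amount : 'A' : '$');       // e.g. '$12,345.67CR'
formatted = %editc(amount : 'J' : *astfill);  // e.g. '***12,345.67-'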
%EDITFLT:
%EDITFLT converts the value of the numeric expression to the character external display representation of a float. The result is either 14 or 23 characters.
The result is 14 characters if the parameter is a 4-byte float field; otherwise, it is 23 characters.
When specified as a parameter for a definition specification keyword, the parameter must be a numeric literal, a float literal, or the name of a numeric constant.
When specified in an expression, constant folding is applied if the numeric expression has a constant value.
Syntax:%EDITFLT(numeric expression)
Example:
Result
%EDITW:
The %EDITW function formats numeric values with special characters such as the asterisk (*), period (.), comma (,), cent sign (¢), dollar sign ($), minus sign (-), credit sign (CR), and percent sign (%).
It can also be used to format a number in date format by inserting slashes (/).
Real-world scenarios frequently need us to provide reports with amount fields that look like $12,345.67-, $12,345.67CR, or ‘***12345.67-‘ rather than showing the amount as -12,345.67.
We can use the %EDITW Function in such report production programs to generate the output we desire.
Syntax:%EDITW(numeric : editword)
The first parameter is the input numeric value that we wish to format.
The second parameter is the edit word used to generate the required edited string.
Example:
Result
%ELEM:
The %ELEM function can be used to get the total number of elements present in a table, array, or multiple-occurrence data structure.
Stated alternatively, this function allows us to get the dimension.
Syntax: %ELEM(table_name)
%ELEM(array_name)
%ELEM(multiple_occurrence_data_structure_name)
Example:
Result
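A minimal free-format sketch (array name and dimension are illustrative):

dcl-s monthNames char(10) dim(12);
dcl-s count int(10);

count = %elem(monthNames);   // 12: the declared dimension of the array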
%EOF (Return End or Beginning of File Condition)
This built-in function is used in place of a resulting indicator to detect end-of-file, beginning-of-file, or subfile-full conditions on file operations.
In other words, instead of checking a resulting indicator, we simply use %EOF to determine whether the end of the file has been reached.
%EOF returns ‘1’ if an end-of-file, beginning-of-file, or subfile-full condition is detected; otherwise, it returns ‘0’.
READ, READC, and READE set %EOF=*ON when the end of the file is reached.
READP and READPE set %EOF=*ON when the beginning of the file is reached.
The WRITE operation sets %EOF=*ON when writing a subfile detail record results in a subfile-full condition.
If %EOF=*ON and a CHAIN operation is then carried out, a successful search sets %EOF=*OFF.
Successful SETGT, SETLL, and OPEN operations also set %EOF=*OFF.
Syntax: %EOF(file_name)
Example:
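A minimal free-format sketch, assuming a keyed input file CUSTMAST (all names are illustrative):

dcl-f custmast usage(*input) keyed;

read custmast;
dow not %eof(custmast);
   // process the record here
   read custmast;
enddo;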
%EQUAL:
%EQUAL is used with the two operation codes SETLL and LOOKUP.
With the SETLL operation, it indicates that a record was found in the file with a key equal to the value specified in factor 1.
Therefore, we can use SETLL along with %EQUAL to verify that a record is present.
For the SETLL operation, this function returns ‘1’ if a record is present whose key or relative record number equals the value mentioned in factor 1.
For the LOOKUP operation, this function returns ‘1’ if an element is found that exactly matches the element in factor 1.
Syntax: %EQUAL(file_name);
Example:
%LOWER :
Yields the string operand after it has been partially or fully converted to lowercase.
The first operand is the string to be converted to lowercase.
It may be UCS-2 or alphanumeric in type.
The conversion’s starting point is represented by the second operand.
Its value must be between one and the string’s length, and it must be a numeric expression with zero decimal places. It’s Optional.
The conversion begins at the first position in the string if it is not specified.
The length to be converted is the third operand. It must be less than or equal to the length of the string beginning at the start point, and it must be a numeric expression with zero decimal places. It might be zero. It’s not required.
Syntax: %LOWER(string {: start { : length } })
Example:
Result
%UPPER:
%UPPER returns the string operand, with all or part of the operand converted to upper case.
The conversion’s starting point is represented by the second operand.
Its value must be between one and the string’s length, and it must be a numeric expression with zero decimal places. It’s Optional. The conversion begins at the first position in the string if it is not specified.
The length to be converted is the third operand.
It must be less than or equal to the length of the string beginning at the start point, and it must be a numeric expression with zero decimal places. It might be zero. It’s not required.
Syntax: %UPPER(string {: start { : length } })
Example:
Result
%MAX:
The maximum value of its operands is returned by %MAX.
The operands must all have data types that are compatible for comparison with each other.
If one item in the list is alphanumeric, the others may be graphic, UCS-2, or alphanumeric. If one item is numeric, the others may be integer, unsigned integer, binary decimal, float, zoned numeric, or packed numeric.
Operands cannot include items of type object or procedure-pointer.
There must be at least two operands. There is no practical upper limit for the number of operands.
Syntax: %MAX(item1 : item2 {: item3 { item4 … } })
Example:
Result
%MIN:
The minimum value of its operands is returned by %MIN.
The operands must all have data types that are compatible for comparison with each other
If one item in the list is alphanumeric, the others may be graphic, UCS-2, or alphanumeric.
If one item is numeric, the others may be integer, unsigned integer, binary decimal, float, zoned numeric, or packed numeric.
Operands cannot include items of type object or procedure-pointer.
There must be at least two operands. There is no practical upper limit for the number of operands.
Syntax: %MIN(item1 : item2 {: item3 { item4 … } })
Example:
Result
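A minimal free-format sketch covering both %MAX and %MIN (variable names are illustrative):

dcl-s a int(10) inz(12);
dcl-s b int(10) inz(7);
dcl-s c int(10) inz(25);
dcl-s highest int(10);
dcl-s lowest int(10);

highest = %max(a : b : c);   // 25
lowest  = %min(a : b : c);   // 7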
%SCAN:
To determine the search argument’s first position in the source string, use the %SCAN function.
The position of the first match is returned if a match is found; otherwise, 0 is returned.
The search element that is being looked up in the source string is the function’s first parameter.
The source string that we are searching in is the second parameter.
The third parameter indicates the starting point for the search within the given string.
The type of the second parameter must match the first. These parameters may be UCS-2, graphic, or character.
The search argument or source string can contain blanks, either as a blank string or as a string padded with blanks; those blanks are taken into account when performing the search.
Syntax: %SCAN(search argument : source string {: start position {: length}})
Example:
Result
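A minimal free-format sketch (variable names are illustrative):

dcl-s source char(20) inz('Search String');
dcl-s pos int(10);

pos = %scan('String' : source);   // 8: position of the first match
pos = %scan('xyz' : source);      // 0: not found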
%SCANR :
%SCANR returns the last position of the search argument in the source string, or 0 if it was not found.
The start position and length specify the substring of the source string to be searched.
If the start position and length are not specified, they default to 1 and the remainder of the source string, respectively. The result is always the position in the source string, even if a start position is specified.
The search element that is being looked up in the source string is the function’s first parameter.
The source string that we are searching in is the second parameter.
The third parameter indicates the starting point for the search within the given string.
The type of the second parameter must match the first. These parameters may be UCS-2, graphic, or character.
The search argument or source string can contain blanks, either as a blank string or as a string padded with blanks; those blanks are taken into account when performing the search.
Syntax: %SCANR(search argument : source string {: start position {: length}})
Example:
Result
%SCANRPL:
The %SCANRPL function replaces all occurrences of the scan string in the source string with the replacement string and returns the resulting string.
The search for the scan string starts at the scan start position and continues for the scan length.
The parts of the source string outside the range specified by the scan start position and the scan length are still included in the result.
The first, second, and third parameters must be of type character, graphic, or UCS-2, in either fixed or varying length.
These parameters must all have the same type and CCSID.
The fourth parameter represents the starting position, in characters, at which the search for the scan string begins. If it is not specified, the start position defaults to 1. The value may range from 1 to the current length of the source string.
The fifth parameter represents the number of characters in the source string to be scanned. If it is not specified, the length defaults to the remainder of the source string, starting from the start position.
Syntax: %SCANRPL(scan string : replacement : source { : scan start { : scan length } } )
Example:
Result
%TRIM:
To remove blank space from a string on both sides, use the %TRIM function.
It can also be used to trim characters other than blanks; the characters to be removed can be specified in the second parameter.
Syntax: %TRIM(string {: characters to trim})
Example:
Result
%TRIML:
The %TRIML function is used to remove a string’s leading blanks.
It can also be used to trim characters other than blanks; the characters to be removed can be specified in the second parameter.
Syntax: %TRIML(string {: characters to trim})
Example:
Result
%TRIMR:
The %TRIMR function is used to remove a string’s trailing blank spaces.
It can also be used to trim characters other than blanks; the characters to be removed can be specified in the second parameter.
Syntax: %TRIMR(string {: characters to trim})
Example:
Result
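A minimal free-format sketch covering all three trim functions (variable names are illustrative):

dcl-s padded char(15) inz('  IBM i  ');
dcl-s result varchar(15);

result = %trim(padded);          // 'IBM i': blanks removed on both sides
result = %triml(padded);         // 'IBM i  ...': leading blanks removed only
result = %trimr(padded);         // '  IBM i': trailing blanks removed only
result = %trim('**42**' : '*');  // '42': trims the characters given in parameter 2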
%XLATE:
%XLATE translates the string according to the values of from, to, and startpos.
The first parameter contains a list of the characters to be replaced, and the second contains their replacements. For example, each occurrence in the string of the third character of from is replaced with the third character of to.
The third parameter is the string to be translated, and the fourth is the starting position of the translation. By default, translation starts at position 1.
If the first parameter is longer than the second, its additional characters are ignored.
The first three parameters may be of character, graphic, or UCS-2 type; all three must have the same type.
Syntax: %XLATE(from:to:string{:startpos})
Example:
Result
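A minimal free-format sketch that uppercases a string (variable names are illustrative):

dcl-c LOWER 'abcdefghijklmnopqrstuvwxyz';
dcl-c UPPER 'ABCDEFGHIJKLMNOPQRSTUVWXYZ';
dcl-s name char(10) inz('rpg dept');
dcl-s result char(10);

result = %xlate(LOWER : UPPER : name);   // 'RPG DEPT'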
%SUBST:
%SUBST extracts part of a string, starting from any position.
The first parameter is the source string from which we wish to extract a portion.
The second parameter is the starting position of the extraction.
The third parameter is the length to extract.
Syntax: %SUBST(string:start{:length})
Example:
Result
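A minimal free-format sketch (variable names are illustrative):

dcl-s fullName char(20) inz('JOHN SMITH');
dcl-s lastName char(10);

lastName = %subst(fullName : 6 : 5);   // 'SMITH': 5 characters from position 6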
%HOURS:
A number is converted to a duration (number of hours) using %HOURS.
This duration can be used to increase or decrease the value of a time or timestamp.
Therefore, we may obtain any past or future time by using %HOURS.
Syntax: %HOURS(number)
Example:
Result
%MINUTES:
A number is converted to a duration (number of minutes) using %MINUTES. This duration can be used to increase or decrease the value of a time or timestamp. Thus, we may obtain any past or future time by using %MINUTES.
Syntax: %MINUTES(number)
Example:
Result
%SECONDS:
%SECONDS converts a number to a duration (a number of seconds) that can be added to or subtracted from a time or timestamp value.
Syntax: %SECONDS(number)
Example:
Result
%SUBDT:
A subset of the data in a date, time, or timestamp value is extracted using %SUBDT.
It returns an unsigned numeric value.
The date, time, or timestamp value is the first parameter.
The part you wish to extract is the second parameter.
Syntax: %SUBDT(value : unit { : digits { : decpos } })
Example:
Result
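A minimal free-format sketch (the timestamp value is illustrative):

dcl-s ts timestamp inz(z'2024-07-23-14.08.56.834000');
dcl-s part uns(10);

part = %subdt(ts : *years);   // 2024
part = %subdt(ts : *hours);   // 14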
%RANGE:
The IN operator is used with %RANGE. %RANGE can only be specified by following the IN operator; it does not return a value.
When used with %RANGE, the IN operator determines whether the first operand is within the range given by %RANGE.
The expression using the IN operator with %RANGE is true if the first operand of the IN operator is greater than or equal to the first operand of %RANGE and less than or equal to the second operand of %RANGE.
The first operand of the IN operator cannot be an array.
The operands of %RANGE must be able to be compared to each other and to the first operand of the IN operator.
Syntax: operand IN %RANGE(lower-limit : upper-limit)
Example:
Result
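A minimal sketch, assuming a release that supports the IN operator with %RANGE (names are illustrative):

dcl-s score int(10) inz(85);

if score in %range(80 : 90);
   // runs when score is between 80 and 90, inclusive
endif;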
%SQRT:
The square root of the given numeric expression is returned by the %SQRT function. If the operand is of type float, the result is of type float; otherwise, the result is packed decimal numeric. The parameter raises exception 00101 if its value is less than zero.
Syntax: %SQRT (numeric expression)
Example:
Result
%REPLACE
The %REPLACE function returns the string produced by inserting the replacement string into the source string, starting at the start position and replacing the specified number of characters.
Syntax: %REPLACE(replacement string: source string{:start position {:source length to replace}})
Example:
Result
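A minimal free-format sketch (variable names are illustrative):

dcl-s source varchar(30) inz('Hello world');
dcl-s result varchar(30);

// replace 5 characters ('world') starting at position 7
result = %replace('IBM i' : source : 7 : 5);   // 'Hello IBM i'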
%XFOOT:
%XFOOT produces the sum of all elements in the specified numeric array expression.
The precision of the result is the minimum that can hold the result of adding together all array elements, up to a maximum of 63 digits. The result’s decimal places are always the same as the array expression’s decimal places.
Syntax: %XFOOT (array-expression)
Example:
Result
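A minimal free-format sketch (array name and values are illustrative):

dcl-s sales packed(7:2) dim(4);
dcl-s total packed(9:2);

sales(1) = 100.25;
sales(2) = 200.50;
sales(3) = 50.00;
sales(4) = 10.10;
total = %xfoot(sales);   // 360.85: the sum of all four elements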
%MSECONDS :
%MSECONDS converts a number into a duration (a number of microseconds) that can be added to or subtracted from a time or timestamp value.
%MSECONDS can follow only the plus or minus sign in an addition or subtraction expression, and the value that precedes the sign must be a time or timestamp. The result is a time or timestamp value with the appropriate number of microseconds added or subtracted. The resulting value is initially in *ISO format.
Syntax: %MSECONDS(number)
Example:
Result
%ADDR :
A value of type basing pointer is returned by %ADDR. The address of the specified variable is the value. It may only be compared with and assigned to items of type basing pointer.
When *DATA is given as the second argument of %ADDR, the address of the data component of a variable-length field is returned by %ADDR.
The array index needs to be known at compile time if %ADDR with an array index parameter is used as a parameter for defining specification keywords INZ or CONST. Either a numerical literal or a numerical constant must be used as the index.
Syntax: %ADDR(variable)
%ADDR(varying-length variable : *DATA)
Example:
Result
%ALLOC:
A pointer to freshly allocated heap storage of the given length is returned by %ALLOC. The newly allocated storage is uninitialized.
The parameter must be a non-float numeric value with zero decimal places. The length must fall between 1 and the maximum size allowed.
The maximum size allowed depends on the type of heap storage used for RPG memory management operations, which is controlled by the ALLOC keyword on the control specification.
Syntax: %ALLOC(num)
Example:
Result
%BITAND :
The bit-wise ANDing of each argument’s bits is returned by %BITAND. That is, the result bit is ON when all of the corresponding bits in the arguments are ON, and OFF otherwise.
This built-in function accepts either character or numeric parameters. Numerical arguments are first converted to integer if they are not integer or unsigned. If the value does not fit in an 8-byte integer, a numeric overflow exception is issued.
There can be two or more arguments for %BITAND. Each parameter must be of the same type—a number or character. The types of the arguments and the result are the same.
Syntax: %BITAND(expr:expr{:expr…})
Example:
Result
%BITNOT:
%BITNOT returns the bit-wise inverse of the bits of the argument. In other words, the result bit is ON when the argument’s corresponding bit is OFF and OFF otherwise.
This built-in function accepts either a character or a number argument. Numerical arguments are first converted to integer if they are not integer or unsigned. If the value does not fit in an 8-byte integer, a numeric overflow exception is issued.
%BITNOT accepts only a single parameter. The argument and the result have the same type. For a numeric argument, the result is unsigned if the argument is unsigned; otherwise, it is an integer.
Syntax: %BITNOT(expr)
Example:
Result
%BITOR:
%BITOR returns the bit-wise ORing of the bits of all the arguments. In other words, the result bit is ON when any corresponding bit in the arguments is ON, and OFF otherwise.
This built-in function accepts either character or numeric parameters. Numerical arguments are first converted to integer if they are not integer or unsigned. If the value does not fit in an 8-byte integer, a numeric overflow exception is issued.
There may be two or more arguments for %BITOR. Each parameter must be of the same type—a number or character.
Syntax: %BITOR(expr:expr{:expr…})
Example:
Result
%SPLIT :
%SPLIT splits a string into an array of substrings. It returns a temporary array of the substrings.
%SPLIT can be used in calculation statements wherever an array can be used except:
- SORTA
- %ELEM
- %LOOKUP
- %SUBARR
The first operand is the string to be split. It can be alphanumeric, graphic, or UCS-2.
The second operand is the list of characters that indicate the end of each substring. It is optional. It must have the same type and CCSID as the first operand. If it is not specified, %SPLIT defaults to splitting at blanks.
If the length of the second operand is greater than 1, any of the characters in the second operand indicate the end of each substring.
For example, %SPLIT(‘abc.def-ghi’ : ‘.-‘) has two separator characters, ‘.’, and ‘-‘, so it returns an array with three elements: (‘abc’,’def’,’ghi’).
Syntax: %SPLIT(string {: separators })
Example:
Result
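A minimal free-format sketch, assuming a release that supports FOR-EACH (IBM i 7.4 or later); names are illustrative:

dcl-s word varchar(20);

// blanks are the default separator when no second operand is given
for-each word in %split('one two three');
   dsply word;   // 'one', then 'two', then 'three'
endfor;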
%BITXOR :
%BITXOR returns the bit-wise exclusive ORing of the bits of the two parameters. That is, the result bit is ON when exactly one of the corresponding bits in the arguments is ON, and OFF otherwise.
This built-in function accepts either a character or a number argument. Numerical arguments are first converted to integer if they are not integer or unsigned. If the value does not fit in an 8-byte integer, a numeric overflow exception is issued.
%BITXOR requires a pair of arguments. The types of the arguments and the result are the same. If all of the parameters are unsigned, the result for numerical arguments is unsigned; if not, it is an integer.
Syntax: %BITXOR(expr:expr)
Example:
Result
%ERROR:
%ERROR returns ‘1’ if an error condition was encountered during the most recent operation that specified the ‘E’ extender.
This is equivalent to the operation’s error indicator being turned on. %ERROR is set to return ‘0’ before an operation with the ‘E’ extender begins, and it remains ‘0’ after the operation if no error occurs.
The built-in function %ERROR can be set by any operation that allows an error indicator. The CALLP operation can also set %ERROR.
Example:
Result
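A minimal free-format sketch, assuming a keyed file CUSTMAST with a numeric key (all names are illustrative):

dcl-f custmast usage(*input) keyed;
dcl-s custId packed(7:0) inz(1001);

chain(e) custId custmast;   // 'E' extender: an I/O error sets %ERROR instead of halting
if %error;
   // handle the I/O error here
elseif %found(custmast);
   // process the record
endif;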
%SIZE :
The %SIZE function returns the number of bytes occupied by the element.
The argument can be a named constant, data structure, array, field, literal, and so on.
For a field with a null value, %SIZE returns the full declared length.
For an array, table, or multiple-occurrence data structure, the size of all elements or occurrences is counted if *ALL is specified as the second parameter.
Syntax: %SIZE(variable)
%SIZE(array{:*ALL})
%SIZE(table{:*ALL})
%SIZE(multiple occurrence data structure{:*ALL})
Example:
Result
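A minimal free-format sketch (names are illustrative):

dcl-s name char(25);
dcl-s amounts packed(7:2) dim(10);

dsply (%char(%size(name)));            // 25
dsply (%char(%size(amounts)));         // 4: one packed(7:2) element occupies 4 bytes
dsply (%char(%size(amounts : *all)));  // 40: all 10 elements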
%TIMESTAMP :
Use the %TIMESTAMP function to convert a string into a timestamp data type.
Syntax: %TIMESTAMP (value : *ISO | *ISO0 )
The first parameter is the input value that we wish to convert to a timestamp.
The optional second parameter specifies the timestamp format of the input string.
Example:
Result
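A minimal free-format sketch (the input value is illustrative):

dcl-s ts timestamp;

ts = %timestamp('2024-07-23-14.08.56.834000' : *iso);  // character string to timestamp
ts = %timestamp();                                     // current system timestamp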
%FOUND:
%FOUND returns ‘1’ if the most recent relevant operation found a matching record; an exact match is not assured.
In the event that no match is found, ‘0’ is returned.
Syntax: %FOUND{(file_name)}
Example:
%MAXARR:
%MAXARR returns the index of the maximum value in the array, or the subsection of the array identified by the start-element operand and the number-of-elements operand.
Syntax: %MAXARR(array {: start-index {:number-of-elements}})
Example:
Result
%MINARR:
%MINARR returns the index of the minimum value in the array, or the subsection of the array identified by the start-element operand and the number-of-elements operand.
Syntax: %MINARR(array {: start-index {:number-of-elements}})
Example:
Result
%STATUS:
The %STATUS function returns the most recent value set for the program or file status. %STATUS is set whenever the status of any file or program changes, usually as a result of an error.
If %STATUS is used without the optional file_name parameter, the most recent program or file status is returned. If a file is specified, the value contained in the INFDS *STATUS field for that file is returned; the INFDS does not need to be explicitly defined for the file.
%STATUS starts with a return value of 00000 and is reset to 00000 before any operation with an ‘E’ extender begins.
Syntax: %STATUS{(file_name)}
Example:
%LEN:
%LEN can be used to get the length of a variable or expression, to set the current length of a variable-length field, or to find the maximum length of a varying-length expression (using *MAX).
The parameter cannot be a figurative constant.
Syntax: %LEN(expression) or %LEN(varying-length expression : *MAX)
Example:
Result
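A minimal free-format sketch (names are illustrative):

dcl-s name varchar(30) inz('IBM i');
dcl-s fixed char(10) inz('ABC');

dsply (%char(%len(name)));          // 5: current length of the varying field
dsply (%char(%len(fixed)));         // 10: declared length of the fixed field
dsply (%char(%len(name : *max)));   // 30: maximum length of the varying field
%len(name) = 3;                     // sets the current length; name is now 'IBM'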
%REM :
%REM returns the remainder when operand n is divided by operand m. Both operands must be numeric values with zero decimal places. If either operand is a packed, zoned, or binary numeric value, the result is packed numeric. If either operand is an integer numeric value, the result is an integer.
Otherwise, the result is unsigned numeric. Float numeric operands are not allowed. The result has the same sign as the dividend.
Syntax: %REM(n:m)
Example:
Result
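A minimal free-format sketch pairing %REM with %DIV (variable names are illustrative):

dcl-s totalMinutes int(10) inz(130);
dcl-s hours int(10);
dcl-s minutes int(10);

hours   = %div(totalMinutes : 60);   // 2
minutes = %rem(totalMinutes : 60);   // 10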
%INT:
The built-in %INT function converts the numeric expression’s value to an integer.
Syntax: %INT(NumericExpression)
Example:
Result
%MSG:
%MSG is used as the second operand of the SND-MSG operation. Other than for the SND-MSG operation, %MSG cannot be specified and does not return a value.
%MSG specifies the message to be sent.
The first operand is the message ID. It must be a character expression in the job CCSID. The message ID consists of seven characters; if the operand is longer than 7, the remaining characters must be blank. The message ID must exist in the message file at run time.
The second operand is the message file. It must be a character expression in the job CCSID. It can have one of the following forms:
- MYMSGF
- MYLIB/MYMSGF
- *LIBL/MYMSGF
The third operand is optional. It specifies the replacement text for the message. It can be a data structure or a character value in the job CCSID.
Syntax: %MSG(msg-id : msg-file { : replacement-text })
Example:
We can use the %MSG built-in function to send a message to the job log. Before demonstrating this, we need a message file and, within it, a message we can use.
%STR:
%STR is used to create or work with null-terminated character strings, which are frequently used in C and C++ applications.
The first parameter must be a basing-pointer value (for example, an expression such as %ADDR(DATA) or P+1). The second parameter, if specified, must be a numeric value with zero decimal positions.
If it is not specified, it defaults to the maximum length allowed for a character variable.
The first parameter must point to storage that is at least as long as the length given by the second parameter.
Syntax: %STR(basing pointer{: max-length})(right-hand-side)
%STR(basing pointer : max-length)(left-hand-side)
Example:
%OPEN:
%OPEN returns ‘1’ if the specified file is open. A file is considered “open” if it was opened by the RPG module at initialization or by an OPEN operation, and it has not been closed since.
If the file is conditioned by an external indicator and that indicator was off at module initialization, the file is considered closed and %OPEN returns ‘0’.
Syntax: %OPEN(file_name)
Example:
%UNS :
The expression’s value is converted to unsigned format using %UNS. Any decimal digits are truncated. An array index can be created by truncating the decimal places of a float or decimal value using %UNS.
If a character expression is used as the parameter, the rules for converting character values to numeric values with built-in functions apply (see the rules for character expressions for %DEC).
Floating-point character data is not allowed; that is, the numeric value cannot be followed by E and an exponent (as in ‘1.2E6’).
If invalid numeric data is found, an exception occurs with status code 105.
Syntax: %UNS(numeric or character expression)
Example:
Result
%UNSH:
%UNSH and %UNS are equivalent, except that when converting an expression to an integer type, half adjust (rounding) is applied to the expression’s value if it is a decimal, float, or character value. No message is issued if the half adjust cannot be performed.
Syntax: %UNSH(numeric or character expression)
Example:
Result
%TLOOKUPxx
If a value is found that satisfies the specified condition, the function returns *ON, the current table element for the search table is set to the element that satisfies the condition, and the current table element for the alternate table is set to the same element.
*OFF is returned if no value fulfills the required condition.
The first two arguments may be of any type, but they must be of the same type. They do not need to have the same length or number of decimal positions.
The ALTSEQ table is used unless arg or search-table is defined with ALTSEQ(*NONE).
The built-in functions %FOUND and %EQUAL are not set following a %TLOOKUP operation.
- %TLOOKUP: an exact match.
- %TLOOKUPLT: the value closest to arg but less than arg.
- %TLOOKUPLE: an exact match, or the value closest to arg but less than arg.
- %TLOOKUPGT: the value closest to arg but greater than arg.
- %TLOOKUPGE: an exact match, or the value closest to arg but greater than arg.
Syntax:
%TLOOKUP(arg : search-table {: alt-table})
%TLOOKUPLT(arg : search-table {: alt-table})
%TLOOKUPGE(arg : search-table {: alt-table})
%TLOOKUPGT(arg : search-table {: alt-table})
%TLOOKUPLE(arg : search-table {: alt-table})
Example:
Result
%TARGET:
The third operand in the SND-MSG operation is %TARGET. Other than for the SND-MSG operation, %TARGET cannot be provided and does not return a value.
%TARGET specifies the target program or procedure for the message.
The first operand may be one of the following:
- *SELF: the default for an informational message. The message is sent to the current procedure.
- *CALLER: the default for an escape message. The message is sent to the caller of the current procedure.
- The name of a program or procedure on the program stack. It must be a character value in the job CCSID.
The second operand is the offset on the program stack. It is optional and defaults to zero if not specified. It must be a numeric value with zero decimal positions, and the value cannot be negative.
Example:
%SUBARR:
The built-in function %SUBARR returns a section of the specified array, starting at start-index. The optional number-of-elements parameter specifies how many elements are returned; if it is not supplied, it defaults to the remainder of the array.
The first parameter of %SUBARR must be an array: a standalone field, data structure, or subfield defined as an array. It must not be a table name or a procedure call.
The start-index argument must be a numeric value with zero decimal places; a float numeric value is not allowed. The value must be greater than or equal to 1 and less than or equal to the number of elements in the array.
The optional number-of-elements argument must also be a numeric value with zero decimal places.
Syntax: %SUBARR(array:start-index{:number-of-elements})
Example:
Result
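A minimal free-format sketch (array names are illustrative):

dcl-s nums int(10) dim(10);
dcl-s firstFive int(10) dim(5);

firstFive = %subarr(nums : 1 : 5);   // copy elements 1 through 5
sorta %subarr(nums : 3 : 4);         // sort only elements 3 through 6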
%SHTDN :
%SHTDN returns ‘1’ if the system operator has requested shutdown; otherwise, it returns ‘0’.
Syntax: %SHTDN
Example:
Result
%FIELDS:
A file can be partially updated by using the %FIELDS function. In other words, when we only need to change one or two fields in a record, we use this function.
We specify the name of the field we want to edit in the file in the %FIELDS argument. Only the mentioned fields are updated.
Syntax: %FIELDS(name{:name…})
Example:
Result
Before:
After:
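A minimal free-format sketch, assuming a keyed update file EMPMAST with record format EMPREC containing a SALARY field (all names are illustrative):

dcl-f empmast usage(*update) keyed;
dcl-s empId packed(7:0) inz(1001);

chain empId empmast;
if %found(empmast);
   salary += 500;                   // SALARY comes from record format EMPREC
   update emprec %fields(salary);   // only SALARY is written back
endif;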
%PARMS:
%PARMS returns the number of parameters passed to the procedure in which %PARMS is used. For the main procedure, %PARMS returns the same value as *PARMS in the program status data structure.
Example:
Result
%GRAPH:
%GRAPH returns a graphic value after converting the expression’s value from character, graphic, or UCS-2. If the parameter varies in length, the outcome will also vary in length.
The optional second parameter, ccsid, indicates the CCSID of the resulting expression. If it is not specified, it defaults to the default graphic CCSID of the module, which is set by the CCSID(*GRAPH) control keyword.
The %GRAPH built-in function is not allowed if CCSID(*GRAPH : *IGNORE) is specified or assumed for the module.
Syntax: %GRAPH(char-expr | graph-expr | UCS-2-expr { : ccsid })
Example:
%INTH:
%INTH and %INT are equivalent, except that when converting an expression to an integer type, half adjust (rounding) is applied to the expression’s value if it is a decimal, float, or character value. No message is issued if the half adjust cannot be performed.
Syntax: %INTH(numeric or character expression)
Example:
Result
Operation Codes and Extenders
Definition:
Opcode extenders in IBM i are like modifiers for commands in programming. They help you specify details or customize the behavior of an operation, making it more flexible and tailored to your specific needs in the program you’re writing.
Types of Opcode extenders
Opcode extenders | Description |
---|---|
A | Used on the DUMP operation to ensure that it is always executed, regardless of the DEBUG option set on the H specification. |
H | Half adjusts (rounds) the result of a numeric operation. |
N | Reads a record without locking it (used on read operations against update-capable files). |
P | Pads the result field with blanks. |
D | Denotes a date field. |
T | Denotes a time field. |
Z | Refers to a timestamp field. |
M | Specifies default precision rules. |
R | Refers to “Result Decimal Position” precision rules. |
E | Handles error conditions. |
Uses of Opcode extenders
- For Assigning values.
- For Arithmetic operations.
- For Strings operations.
- For date/time/timestamp operations.
- For File operations
Opcode extenders for assigning values.
We have opcode extenders that are used while assigning numeric and string values. Below are the definitions of the work variables that we are using to perform operations using opcode.
- Eval(H): Half adjust (round) the numeric value while evaluating a variable or parameter (a free-format sketch of this case follows the list below).
At line 31, once the eval opcode executes, it adds the values of a and b (i.e., 10.25 and 10.20) and assigns the value 20.45 to the Result2 variable.
At line 34, once the eval opcode executes, it adds the values of a and b (i.e., 10.25 and 10.20) and assigns the value 20.4 to the Result3 variable, as it has only one decimal position.
At line 37, once the eval opcode executes with the H extender, it adds the values of a and b (i.e., 10.25 and 10.20) and assigns the value 20.5 to the Result variable by rounding 20.45 up to 20.5.
- Move(P) & Movel(P): Pad the string value with blanks while moving from one variable to another.
At line 41, once the Move opcode executes, it moves the value from A1 (‘AAAAA’) to B2 (‘BBBBBBBBBB’) and assigns the value ‘BBBBBAAAAA’ to the B2 variable.
At line 45, once the Move opcode with the P extender executes, it moves the value from A1 (‘AAAAA’) to B2 (‘BBBBBBBBBB’) and assigns the value ‘     AAAAA’ (five leading blanks) to the B2 variable.
At line 49, once the Movel opcode executes, it moves the value from A1 (‘AAAAA’) to B2 (‘BBBBBBBBBB’) and assigns the value ‘AAAAABBBBB’ to the B2 variable.
At line 53, once the Movel opcode with the P extender executes, it moves the value from A1 (‘AAAAA’) to B2 (‘BBBBBBBBBB’) and assigns the value ‘AAAAA     ’ (five trailing blanks) to the B2 variable.
- Eval(M): Evaluates using the default precision rules.
At line 57, once the Eval opcode with the M extender executes, it assigns the value 2.80000 to Result4.
- Eval(R): Evaluates using the “Result Decimal Position” precision rules.
At line 57, once the Eval opcode with the R extender executes, it assigns the value 2.85714 to Result4.
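A rough free-format equivalent of the Eval(H) case above (field names are illustrative):

dcl-s a packed(7:2) inz(10.25);
dcl-s b packed(7:2) inz(10.20);
dcl-s result packed(5:1);

eval    result = a + b;   // 20.4: the extra decimal digit is truncated
eval(h) result = a + b;   // 20.5: half adjust rounds 20.45 up to 20.5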
Opcode extenders for arithmetic operations.
We have opcode extenders that are used while performing arithmetic operations. Below are the definitions of the work variables that we are using to perform operations using opcode.
- Add(H): Half adjust (round) of the numeric value while adding the values of the variables.
At line 64, once the Add opcode executes, it will add the values of a and b (i.e., 10.25 and 10.20) and assign a 20.45 value to the Result2 variable.
At line 66, once the Add opcode executes with Half Extender, it will add the values of a and b (i.e., 10.25 and 10.20) and assign a 20.5 value to the Result variable by rounding the value from 20.45 to 20.5.
- Z-Add(H): Half adjust (round) of the numeric value while adding zero to factor 2.
At line 72, once the Z-Add opcode executes with Half Extender, it will add the value of b and Result (i.e., 10.20 and 0) and assign a 10.2 value to the Result variable by rounding the value from 10.20 to 10.2.
Note: As per the Z-ADD opcode, the Result value is first set to zero and then factor 2 is added to it:
0 + Factor 2 (numeric) → Result field
- Sub(H): Half adjust (round) of the numeric value while subtracting the values from one variable to another.
At line 78, once the Sub opcode executes, it will subtract the value from e to f (i.e., 20.55 and 10.20) and assign a 10.35 value to the Result2 variable.
At line 82, once the Sub opcode executes with Half extender, it will subtract the value from e to f (i.e., 20.55 and 10.20) and assign a 10.4 value to the Result variable by rounding the value from 10.35 to 10.4.
- Z-Sub(H): Half adjust (round) of the numeric value while subtracting factor-2 from 0.
At line 88, once the Z-Sub opcode executes with Half extender, it will subtract the value from Result to b (i.e., 0 and 10.20) and assign –10.2 value to the Result variable by rounding off the value from 10.20 to 10.2.
Note: As per the Z-SUB opcode, the Result value is first set to zero and then factor 2 is subtracted from it:
0 – Factor 2 (numeric) → Result field
- MULT(H): Half adjust (round) of the numeric value while multiplying the value of variables.
At line 93, once the Mult opcode executes, it will multiply the value of a and b (i.e., 10.25 and 10.20) and assign a 104.55 value to the Result2 variable.
At line 96, once the Mult opcode executes with Half extender, it will multiply the value of a and b (i.e., 10.25 and 10.20) and assign 104.6 value to the Result variable by rounding the value from 104.55 to 104.6.
- DUMP(A): It is used to perform the dump operation to ensure the operation occurs regardless of the debug option set in the H specification.
At line 44, once the Eval opcode executes, it tries to divide Num1 by Num2 (i.e., 100 and 0) and assigns the default value 1 to Result, as 100/0 is not possible.
Opcode extenders for string operations.
We have opcode extenders that are used while performing string operations. Below are the definitions of the work variables that we are using to perform operations using opcode.
- SUBST(E P): Error handling or padding with blank while substring the string variable.
At line 35, once the Subst opcode executes, it substrings the Target variable (‘XXXXXXXX’) with String1 (‘TEST123’) from starting position T (5) for length 3, changing the value of Target from ‘XXXXXXXX’ to ‘T12XXXXX’.
At line 37, once the Subst opcode with the E extender executes, it tries to substring the Target variable (‘XXXXXXXX’) with String1 (‘TEST123’) from starting position X (22) for length 3, but displays the ‘Error’ message because index 22 is not present in the Target variable.
At line 43, once the Subst opcode with the P extender executes, it substrings the Target variable (‘XXXXXXXX’) with String1 (‘TEST123’) from starting position T (5) for length 3, changing the value of Target from ‘XXXXXXXX’ to ‘T12     ’.
- SCAN(E): Error handling while searching the string.
At line 46, once the Scan opcode executes, it searches for the blank in the String variable (‘Search String’) from starting position K (5) and assigns 7 to the Pos variable.
At line 46, once the Scan opcode with the E extender executes, it tries to search for the blank in the String variable (‘Search String’) from starting position X (22), but displays the ‘Error’ message because index 22 is not present in the String variable.
- XLATE(E P): Error handling or padding with blank while translating from character to character by the protocol specified in factor-1.
At line 55, once the Xlate opcode executes, it translates the Chgcase2 variable (‘rpg dept’) into Result2 (‘XXXXXXXXXXXXXXX’) according to the mapping specified in factor 1 and assigns the new value ‘RPG DEPT XXXXX’ to Result2.
At line 60, once the Xlate opcode with the P extender executes, it translates the Chgcase2 variable (‘rpg dept’) into Result2 (‘XXXXXXXXXXXXXXX’) according to the mapping specified in factor 1 and assigns the new value ‘RPG DEPT       ’ to Result2, padding the remaining ‘XXXXX’ positions of Result2 with blanks.
At line 68, once the Xlate opcode with the E extender executes, it tries to translate the Chgcase2 variable (‘rpg dept’) into Result2 (‘XXXXXXXXXXXXXXX’) from starting position X (22) according to the mapping specified in factor 1, but displays the ‘Error’ message because index 22 is not present in the Chgcase2 variable.
- CHECK(E): Error handling while checking the non-occurrence of a character in a string.
At line 73, once the Check opcode executes, it checks factor 1 (‘ABCD’) against the factor-2 variable Substring (‘AABC1ABD2AV3A’) and assigns 5 to the Pos variable.
At line 76, once the Check opcode executes, it checks factor 1 (‘ABCD’) against the factor-2 variable Substring (‘AABC1ABD2AV3A’) from starting position T (5) and assigns 20 to the Pos variable.
At line 78, once the Check opcode with the E extender executes, it tries to check factor 1 (‘ABCD’) against the factor-2 variable Substring (‘AABC1ABD2AV3A’) from starting position X (22), but displays the ‘Error’ message because index 22 is not present in the Substring variable.
Opcode extenders for date/time/timestamp operations.
We have opcode extenders that are used while performing date/time/timestamp operations. Below are the definitions of the work variables that we are using to perform operations using opcode.
- Test(EDTZ): Validate the date, time or timestamp.
At line 29, once the Test opcode with the Z extender executes, it tests the timestamp Char_Tstmp (‘19960723140856834000’) against the *ISO format and turns off error indicator 18, as Char_Tstmp is a valid timestamp.
At line 30, once the Test opcode with Z extender executes, it will try to test the timestamp Char_Tstmp1 (‘190723140856834000’) with *ISO format and turn on the error indicator 18 as the Char_Tstmp1 is not the correct timestamp.
At line 31, once the Test opcode with Z and E extender executes, it will try to test the timestamp Char_Tstmp1 (‘190723140856834000’) with *ISO format and display the message ‘Invalid Fmt’ as the Char_Tstmp1 is not correct timestamp.
At line 36, once the Test opcode with D extender executes, it will try to test the date Char_Date (‘041596’) with *MDY format and turn off the error indicator 19 as the Char_Date is the correct date.
At line 37, once the Test opcode with the D and E extenders executes, it tries to test the date Num_Date (‘910921’) against the *DMY format and displays the message ‘Invalid Fmt’, as Num_Date is in YYMMDD format but we are checking it in *DMY format.
At line 41, once the Test opcode with E and T extender executes, it will try to test the time Char_Time (‘13:05 PM’) with *USA format and it won’t give any error as the format of Char_Time matches with the *USA format.
Opcode extenders for file operations.
We have an ‘N’ opcode extender that is used to read a record without locking it. Below is a simple example of the N opcode extender.
On line 9, we have declared the file PIOFILE in update mode, and on line 10 we are reading the same file in a loop from 1 to 10. When line 21 executes with the N opcode extender, the record is not locked during the read operation, so any subsequent update/delete operation executes without error.
Indicators
- Indicators defined on the RPG/400 specifications.
- Indicators not defined on the RPG/400 specifications.
1. Indicators defined on the RPG/400 specifications.
1. Indicators defined as variable in RPGLE program –
Fix-format syntax for standalone indicator:
Name | Declaration Type | To/Length | Internal Data Type |
---|---|---|---|
Name-of-Indicator | S | 1 | N |
Declaration type ‘S’ can be ignored if it is not a standalone variable.
Free format syntax:
dcl-s Name-of-Indicator ind;
Code Examples –
Fix-format:
In above screenshot –
On line 17.00: We turn on the isExist indicator if a record is found on line 16.00.
On line 21.00: We use the isExist indicator; if the record was found in the EMPPF file and also found in the EMPDEPT file, then the file is processed on line 22.00.
Free format:
2. Overflow indicator –
The OFLIND keyword specifies the overflow indicator that determines which PRINTER file lines are written when an overflow occurs.
OFLIND keyword only works with PRINTER devices.
Automatic page ejection upon overflow (default overflow processing) takes place in the absence of the OFLIND keyword.
Syntax:
OFLIND(indicator-name);
In OFLIND, we can use the following valid indicators:
*INOA to *INOF, *INOV:
These indicators can be used in a program-described printer file to handle conditions when the overflow line is reached. They are not valid for externally described files.
*IN01 to *IN99:
These indicators can be used when the overflow line is reached or passed during a space or skip operation.
Name:
This can be a variable name of indicator type. We can use this indicator when the overflow line is reached; the program must handle the overflow condition itself.
It behaves the same as indicators *IN01 to *IN99.
Code Examples –
Fix-format:
In the above screenshot, there is an externally described printer file CUSREPORT declared with the OFLIND keyword, which names indicator *IN99.
*IN99 will be turned on when overflow occurs in the printer file CUSREPORT.
In the above screenshot, we are handling overflow with the *IN99 indicator. When overflow occurs, it will print the heading again.
Free format:
3. Display file indicator integration using address of indicators –
By assigning variables to the address of indicators, we can utilize readable names instead of indicators in RPG programs.
To do this –
- There must be an INDARA keyword at the file level in the display file. It provides the functionality to use an indicator data structure in the RPG program.
- Declare the display file in the RPG program with the INDDS keyword, naming the indicator data structure.
- Declare a pointer variable initialized with the address of the indicators.
- Declare the indicator data structure BASED on the pointer variable.
- Now we can use the subfields of the indicator data structure in the RPG program (see the sketch after the code examples).
Code Examples –
INDARA keyword declared at file level in display file –
RPGLE Program declaration to use indicator data structure –
Line 3.00 is the display file declaration; we have used the INDDS keyword to give the name of the indicator data structure, which is indicatorDs in this example.
Line 4.00 is the declaration of a pointer variable initialized with the address of the indicators.
Line 5.00 is the indicator data structure, which is based on the pointer variable declared in line 4.00.
Line 6.00 is a subfield of indicatorDs; it has the indicator data type and occupies position 5, so it can be used in place of *IN05.
In line 31.00, we have a variable ‘previous’ which is a subfield of the indicator data structure; it points to the address of indicator *IN12.
So, when previous is turned on, *IN12 is also turned on automatically.
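A minimal free-format sketch of this pattern, assuming a display file MYDSPF compiled with INDARA and a record format SCREEN1 (all names are illustrative):

dcl-f mydspf workstn indds(indicatorDs);

dcl-s pIndicators pointer inz(%addr(*in));   // address of the *INxx indicator array

dcl-ds indicatorDs based(pIndicators);
   exitKey ind pos(3);     // readable name for *IN03
   previous ind pos(12);   // readable name for *IN12
end-ds;

exfmt screen1;
if exitKey;                // same test as IF *IN03 = *ON
   *inlr = *on;
endif;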
4. Control Level Indicators (L1-L9)
The control level indicators (L1 to L9) are used to manage program flow, especially within loops and conditional statements. These indicators are set and checked during the execution of the program.
Here’s an example that demonstrates how you can use control level indicators with physical and logical files in RPG on an IBM i system. In this example, we’ll create a simple program that reads records from a physical file, applies some conditions using control level indicators, and writes the selected records to a logical file.
Assuming you have two files:
There are 2 logical files EMPIOREC and EMPIOTIM.
Here’s an RPG program that increases a count variable when a level break occurs.
- A comment indicates that when a level break occurs on L2 (PIO_DIVSON), 1 should be added to the PIOCNT variable.
- The line ‘L2 PIOCNT ADD 1 PIOCNT 40’ means that when a level break occurs on L2, 1 is added to the PIOCNT variable.
5. Function Key Indicator
Function keys are specified in the DDS with the CFxx (command function) or CAxx (command attention) keyword. For example, the keyword CF01 allows function key 1 to be used. When you press function key 1, function key indicator KA is set on in the RPG program. If you specify the function key as CF01 (99), both function key indicator KA and indicator 99 are set on in the RPG program. If the work-station user presses a function key that is not specified in the DDS, the IBM® i system informs the user that an incorrect key was pressed.
Below is an example RPGLE program and DDS source that follows the specified requirements. In this example, the program reads the display file, and when the user presses function key F1 or F2, the corresponding indicator is set in the RPG program. If the user presses an incorrect key, an error message is displayed.
RPGLE code.
DDS code
In this example, the display file (MyDisplay) has two input fields (FIELD1 and FIELD2). The indicators KA and KB are associated with Function Keys F1 and F2, respectively. The program (MyProgram) reads the display file and checks the values of KA and KB to determine which function key was pressed. If an incorrect key is pressed, an error message is displayed. Adjust the logic inside the ProcessF1 and ProcessF2 subroutines based on your specific requirements for handling each function key.
Below is the table for the function key indicator with its corresponding key.
Function Key Indicator | Corresponding Function Key | Function Key Indicator | Corresponding Function Key |
---|---|---|---|
KA | 1 | KM | 13 |
KB | 2 | KN | 14 |
KC | 3 | KP | 15 |
KD | 4 | KQ | 16 |
KE | 5 | KR | 17 |
KF | 6 | KS | 18 |
KG | 7 | KT | 19 |
KH | 8 | KU | 20 |
KI | 9 | KV | 21 |
KJ | 10 | KW | 22 |
KK | 11 | KX | 23 |
KL | 12 | KY | 24 |
6. Halt Indicator (H1-H9)
The halt indicators are used to handle errors while a program is running. They can be used with record identifying indicators, field indicators, or resulting indicators.
You can use the H1 indicator as a halt indicator. Below is a simple example:
The H1 indicator is used as a halt indicator. When H1 is turned on, the program will stop processing.
The program goes through the usual sequence of input, processing, and output operations.
The PROCESSDATA subroutine checks the condition (*IN99) and increments a counter (NUMOFRECORDS) for demonstration purposes. If the condition is met (in this case, *IN99 is on), the program moves a message to the MSG field, displays it, and sets *INLR to halt the program.
2. Indicators not defined on the RPG/400 specifications.
- Internal Indicators: The internal indicators are special variables used to control the flow of a program and handle various operations and conditions. These indicators are used for decision-making, error handling, and controlling the logic of a program.
- First Page Indicator (1P): Definition: The first page (1P) indicator is set on by the RPG IV program when the program starts running and is set off after detail-time output. Usage: The first record is processed after detail-time output. The 1P indicator can be used to condition heading or detail records that are to be written at 1P time.
- Last Record Indicator (LR): Definition: The last record indicator (LR) is used to identify the last record in a report or file. Usage: Typically, LR is set to *ON for the last record in a report or file to indicate the end of the report or file. Example:
or
or
In this example, the LR indicator is specified in the printer file’s output specification. It is automatically set to *ON for the last record in the output.
- Matching Record Indicator (MR): Definition: The matching record indicator (MR) is used to indicate that two or more fields in a record match the specified criteria. Usage: MR is often used in program logic to identify matching records based on specified conditions. Example:
Three files are used in matching records. All the files have three match fields specified, and all use the same values (M1, M2, M3) to indicate which fields must match. The MR indicator is set on only if all three match fields in either of the files EMPMAS and DEPTMS are the same as all three fields from the WEEKRC file.
The three match fields in each file are combined and treated as one match field organized in the following descending sequence:
DIVSON (M3)
DEPT (M2)
EMPLNO (M1)
- Return Indicator (RT): Definition: The return indicator (RT) is used to determine whether a subroutine or called program has executed successfully and returned a result. Usage: The test to determine if RT is on is made after the test for the status of LR and before the next record is read. If RT is on, control returns to the calling program. RT is set off when the program is called again.
- External Indicators used as Job Indicators: There are 8 external indicators, U1 through U8, which can be set in a CL program or in an RPGLE program. In a CL program, they can be set by the SWS (switch-setting) parameter on the CL commands CHGJOB (Change Job) or CRTJOBD (Create Job Description). In an RPGLE program, they can be set by direct assignment or by using any assignment opcode.
Code examples –
The above screenshot shows CL program logic.
On line 2, indicator U8 is turned on by the SWS parameter of the CL command CHGJOB.
In the SWS parameter of the CHGJOB command, we can set all 8 indicators (U1 through U8):
- Type ‘1’ on corresponding position to turn on the indicator.
- Type ‘0’ on corresponding position to turn off the indicator.
- Type ‘X’ on corresponding position for no change of the indicator.
Line 3 calls the GETCUSTDTL program. In that program, we can control the program flow by conditioning logic on these indicators.
Fix format:
Above is the screenshot of the logic of the GETCUSTDTL program, which checks the *INU8 indicator. If it is turned on, only ENG customers are processed; otherwise, all customers are processed.
Free format:
CL program example to turn on the U8 indicator with the SWS parameter of the CRTJOBD CL command.
In an RPGLE program, job indicators can be set and used in calling/called programs as well to control the program flow.
They can be set by any assignment opcode, such as Eval or Move.
Fix format:
Free format:
Directives
Compiler Directives
A compiler directive is an instruction or direction given to the compiler:
- To perform some specific tasks during compilation.
- To generate a customized compiler listing after compilation.
Compiler directives can be used for many purposes, such as:
- To control the spacing of the compiler listing.
- To include source statement from another source member.
- To do a free form calculation in our RPGLE/SQLRPGLE program.
- To control the source records selection/omission based on some condition.
- To control the heading information in compiler listing.
Compiler directives are divided into two types:
- Compiler directive statements. For example- /TITLE, /EJECT, /COPY and /INCLUDE etc.
- Conditional compiler directives, these allow us to select or omit the source line. For example- /IF, /ELSEIF, /ENDIF, /ELSE , /EOF etc.
Let’s go through the compiler directives one by one:
A. /TITLE
It is used to add heading information to the compiler listing. It is coded in positions 7-12 in fixed format; positions 14-100 can contain the title text.
Few important points:
- We can use more than one /TITLE statement in one program.
- Each /TITLE statement provides heading information for the segment of compiler listing until another /TITLE statement is encountered.
- The /TITLE statement is printed in addition to compiler heading information.
- Each /TITLE statement is printed on a new page.
Example:
In the below example, we have used the /TITLE directive together with some title text. The compiler listing shows each title’s information, and the last title, written as ‘Main code’ in the example, remains in effect until the end of the listing.
After Compilation:
As we can see, the title headings are listed in the compiler listing, and the last heading, MAIN CODE, remains in effect until the end, as it was the last heading with the /TITLE directive.
B. /EJECT
It is used to make the compiler start a new page in the compiler listing. The new page begins at the line where /EJECT is specified in the source.
Its position in the source code is 7-12 in fixed format.
Example:
In the below example we have used /EJECT.
After Compilation:
After /EJECT the compiler will add a new page for the source listing from the line where /EJECT is used in the source code.
C. /FREE and /END-FREE
With the help of these directives, we can write code in free format. To do so, we enclose our code between /FREE and /END-FREE.
Their position in the source code is 7-11 in fixed format.
NOTE: These directives are no longer needed; they are required only if your IBM i does not have the PTF for “free format definition” RPG, which was released along with IBM i 7.1 TR7.
Example:
In the below example, we can see how to use the /FREE and /END-FREE directives in our code.
Here we first code in fixed format; then, when we want to code in free format, we use this directive.
D. /COPY And /INCLUDE
Both /COPY and /INCLUDE are used to add source records from other source members to the current program. Both directives have the same purpose and syntax, but they are handled differently by the SQL preprocessor.
- The /COPY directive is expanded by the preprocessor. The copied file or source can contain embedded SQL or host variables.
- The /INCLUDE directive is not expanded by the preprocessor. The included file or source cannot contain embedded SQL or host variables.
/COPY and /INCLUDE files can be either physical files or IFS files. Its position on the source code is 7-12 in fixed format.
Syntax:
To specify a physical file, the library, file name, and member name can be given in any one of the formats below:
- Library name/source file name, member name. Example: /COPY PIOLIB/QRPGLESRC,COPY_SRC
- Source file name, member name. Example: /COPY QRPGLESRC,COPY_SRC
- Member name. Example: /COPY COPY_SRC
Important points regarding the syntax:
- The member name must be specified.
- If the source file is not specified, QRPGLESRC is assumed.
- If the library is not specified, the library is taken from *LIBL (the library list).
Example: In the below example we have used a copy book.
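The copybook and program are shown only as screenshots; as a sketch of the idea (member and field names are illustrative), suppose the copy member COPY_SRC in QRPGLESRC contains a shared data structure:

     D custInfo        DS
     D  custId                        7P 0
     D  custName                     30A

and the main source pulls it in with:

      /COPY QRPGLESRC,COPY_SRC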
After compilation:
When we compile the above program, the /COPY statement in the compiler listing is replaced by the actual source written in the copybook member, as the compiler listing shows.
E. /SPACE
This directive is used to control the line spacing within the source section of the compiler listing.
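For example, to leave three blank lines in the source section of the listing at a given point (a hypothetical usage):

      /SPACE 3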
F. /SET And /RESTORE
The /SET directive is used to temporarily set a new default value for definitions; to reverse the effect of /SET, we can use /RESTORE.
With /SET directive, we can use the following keywords:
- CCSID(*CHAR : ccsid). Syntax: CCSID(*CHAR : *JOBRUN, *JOBRUNMIX, *UTF8, *HEX, or a number).
- CCSID(*GRAPH : ccsid). Syntax: CCSID(*GRAPH : *JOBRUN, *HEX, *SRC, *IGNORE, or a number).
- CCSID(*UCS2 : ccsid). Syntax: CCSID(*UCS2 : *UTF16 or a number).
- DATFMT(format). Syntax: DATFMT(fmt{separator}).
- TIMFMT(format). Syntax: TIMFMT(fmt{separator}).
Efficient way of using these directives:
- We can specify the SET directive in a copy file so that all modules that include the copy file use the same values for the time and date formats and the CCSIDs.
- We can also code the /SET directive prior to the /COPY or /INCLUDE directive, and then code the /RESTORE directive after the /COPY or /INCLUDE directive to restore the defaults to the values that were previously in effect before the /SET directive.
Some important points:
- We can nest /SET directives.
- The keywords specified on a /RESTORE directive do not have to exactly match the keywords specified on the previous /SET directive.
- A /RESTORE directive can restore some or all the values set by any previous /SET directives.
Example:
In the below example, we have used the /SET directive to set the CCSID of *CHAR, and /RESTORE to reset the CCSID.
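The listing is shown only as a screenshot; a sketch of the pattern described (names and the CCSID value are illustrative):

     D string1         S             10A   INZ('ABC')
      /SET CCSID(*CHAR : *UTF8)
     D string2         S             10A   INZ('ABC')
      /RESTORE CCSID(*CHAR)
     D string3         S             10A   INZ('ABC')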
After Compilation:
We got the below result for string2 because string2 was declared after setting the *CHAR CCSID with /SET, and the CCSID was restored with /RESTORE immediately after the declaration.
string1 is declared before the CCSID is set and string3 after it is restored, so we get the normal character values for string1 and string3.
G. /IF, /ELSEIF, /ELSE, /ENDIF, /DEFINE and /UNDEFINE
- /IF compiler directive is used to do the conditional compilation.
- /IF can be followed by one or more /ELSEIF, followed by an optional /ELSE, and finished with a /ENDIF.
- If the condition expression is true, source lines following the /IF directive are included in the current source to be read by the compiler. Otherwise, lines are excluded until the next /ELSEIF, /ELSE or /ENDIF in the same /IF group.
- The /DEFINE directive sets a condition on, and the /UNDEFINE directive sets it off.
- Basically, these define an element that is used as the condition for the /IF and /ELSEIF directives.
Entry position of /IF:
7-9 = /IF
11-80 = conditional expression
Entry position of /ELSEIF:
7-13 = /ELSEIF
15-80 = conditional expression
Entry position of /ELSE:
7-11 = /ELSE
The /ENDIF compiler directive is used to end the /IF, /ELSEIF, or /ELSE group.
Entry position of /ENDIF:
7-12 = /ENDIF
Example:
The below example contains the usage of the /IF, /ENDIF, and /DEFINE directives.
In the example, we set a condition on using /DEFINE, named DIVIDE, and then use /IF to check whether the DIVIDE condition is on; if it is, we include the copybook CALCDIV, otherwise we do not.
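A sketch of the directives described (the copybook name CALCDIV comes from the example; the source file is assumed):

      /DEFINE DIVIDE
      /IF DEFINED(DIVIDE)
      /COPY QRPGLESRC,CALCDIV
      /ENDIF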
After Compilation:
Because DIVIDE was defined with /DEFINE (i.e., the condition is on) when the /IF was evaluated, the compiler adds the copybook to the source listing.
H. /EOF
By using this directive, we are instructing the compiler to ignore any source lines that come after this directive.
Example:
In the below example, we can see how /EOF is used; when we compile this, the compiler ignores the source after the line containing /EOF.
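A sketch (illustrative code):

     C                   EVAL      total = total + 1
     C                   SETON                                        LR
      /EOF
     C* Lines after /EOF, like this one, are ignored by the compiler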
Limitations:
There are some points we need to take care of, while using the directives.
- The compiler directive statements must precede any compile-time array or table records, translation records, and alternate collating sequence records.
- No directive can be specified within a single free-form calculation statement.
- The /IF, /ELSEIF, /ELSE, and /ENDIF directives can be specified within a single free-form control, file, definition, or procedure statement. No other directives can be specified within these statements.
- Within a free-form statement, when a line begins with what appears to be a directive that is not allowed within that statement, it is interpreted as a slash followed by a name.
- The special directive **FREE can only appear in column 1 of the first line of the source.
File Handling
Types of files:
Multiple types of files exist in the IBM i native file system that we can use in RPGLE programs/modules.
- Database files:
- These include files that contain data in tabular form.
- Physical and logical files are database files.
- For a database file, the file type on the F-spec can be:
- I for using the file in input mode, for reading records.
- O for using the file in output mode, for writing records to the file.
- U for opening the file in update mode, for updating existing records.
- We can also use file addition with input and update modes to add records to the file.
- In free format, we use the USAGE keyword to designate the mode in which the file will be used in the program. The possible values are:
- *INPUT for using the file in input mode, for reading records.
- *OUTPUT for using the file in output mode, for writing records to the file.
- *UPDATE for opening the file in update mode, for updating existing records.
- *DELETE for deleting records from the file.
- You can use *INPUT with *OUTPUT to read and write records without update. It is the same as the file addition we use in fixed format.
Below are some examples.
Fixed format
FPRODUCTS IF E DISK
Free format
dcl-f PRODUCTS usage(*input);
Fixed format
FPRODUCTS IF A E DISK
Free format
dcl-f PRODUCTS usage(*input:*output);
Fixed format
FPRODUCTS UF E DISK
Free format
dcl-f PRODUCTS usage(*delete);
- Workstation Files
- A WORKSTN file is an externally described display-device file that is used to display information and take user input.
- The most commonly used externally described WORKSTN file is a display file.
- The file type on the F-spec for a WORKSTN file is C, which means combined.
Below is an example.
Fixed format
FORDERDSPF CF   E             WORKSTN indds(IndDs)
F                                     sfile(SFLORD:S1RRN)
Free format
dcl-f ORDERDSPF workstn indds(IndDs) sfile(SFLORD:S1RRN) ;
- Printer files
- A printer file is a virtual file with data specifications used for later output to a printer.
- A printer file can only be used for output, so O must be specified on the F-spec.
Below is an example.
Fixed format
FREPORTDN O E PRINTER
Free format
dcl-f REPORTDN printer;
Multi-member Physical files:
A physical file can have multiple members. By default, when a PF is created it has only one member by the same name as the file. We can use the ADDPFM command to add a new member to the PF.
Below are a few useful keywords that make dealing with multi-member files easier.
- EXTMBR: This RPGLE keyword is used on an F-spec and specifies which member of the file will be opened at program initialization. You can specify a member name, '*ALL', or '*FIRST' (the default). The member name should be in upper case. We can also use a variable for the member name, but one of the considerations below should be kept in mind for the variable declaration:
- Use the INZ keyword to initialize the member name on the D specification.
- Pass the value in as an entry parameter.
- Use a program-global variable that is set by another module.
- Below is an example of the keyword with *ALL.
Fixed format
FPRODUCTS IF E DISK EXTMBR('*ALL')
Free format
dcl-f PRODUCTS usage(*input) extmbr('*ALL');
In the above example, all the members are read sequentially. In other words, when all the records from the first member have been read, the program starts reading the records in the second member.
- Consider another example of a file PRODUCTS having multiple members. For each member we have to read the file and do some processing, so we can use the keyword with the member name as a literal.
Fixed format
FPRODUCTS IF E DISK EXTMBR('MANGO')
Free format
dcl-f PRODUCTS usage(*input) extmbr('MANGO');
- Sometimes, we have a use case where the member-name is not constant and is changed based upon some conditions. In that case, we can use a variable for the member-name.
Fixed format
FPRODUCTS IF E DISK EXTMBR(MBRNAME)
Free format
dcl-f PRODUCTS usage(*input) extmbr(MBRNAME);
As the file is opened at program initialization, MBRNAME must be populated, in upper case, before then. You can declare the variable and initialize it with a default value, or the variable can be received as an *ENTRY parameter.
- There is another use case where we have a dynamic member name, let's suppose based on today's date, e.g. OR20240101 is the member name in a multi-member order file having one member per date. To read the member for today's date, we can use the logic below: we specify the USROPN keyword and initialize the variable later in the program, but before doing an OPEN on the file.
dcl-f PRODUCTS usage(*input) extmbr(MBRNAME) usropn;
dcl-s MBRNAME char(10);

MBRNAME = 'OR' + %char(%date():*iso0);
open PRODUCTS;
setll *loval PRODUCTS;
read PRODUCTS;
dow not %eof(PRODUCTS);
  Validate();
  read PRODUCTS;
enddo;
close PRODUCTS;
*inlr = *on;
- EXTFILE: This F-spec keyword specifies the file and library that will be opened at program initialization. Below are the possible values for the keyword; the values must be in upper case.
- filename
- libname/filename
- *LIBL/filename
- *EXTDESC
Fixed format
Finput if f 10 disk extfile('MYLIB/IN2024')
Free format
dcl-f input usage(*input) extfile('MYLIB/IN2024');
If a variable name is used, it must be set before the file is opened. For files that are opened automatically during the initialization part of the cycle, the same considerations apply as for EXTMBR.
The example above shows that you can call the file any name you like; in this case, it is INPUT. EXTFILE tells the program where to find the file; the library name is optional (here, the file is in MYLIB). This can be considered a replacement for the CL command OVRDBF.
- EXTNAME: This keyword is used on a data structure declaration to fetch the field descriptions of the specified file. Below are the examples:
Fixed format
D recOrd E DS extname('ORDHDR')
Free format
dcl-ds record extname('ORDHDR') end-ds;
File read Op-Codes:
- SETLL: SETLL positions the file pointer at the first record whose key field/RRN value is greater than or equal to the factor-1 search argument. After positioning the file pointer, we can perform any file operation, e.g. READ, READP, READPE, or READE, which are discussed further down the line.
- In factor 1 we can use a figurative constant (*LOVAL, *HIVAL, *START, *END), an RRN value, a key value, a key list, or a key data structure.
- To determine whether a record whose key field/RRN value exactly matches the factor-1 search argument was found, we can use the %EQUAL BIF.
Fixed format
C 'MONKEY' SETLL RECORD
Free format
Setll 'MONKEY' RECORD;
- SETGT: SETGT positions the file pointer at the first record whose key field/RRN value is greater than the factor-1 search argument. After positioning the file pointer, we can perform any file operation, e.g. READ, READP, READPE, or READE, which are discussed further down the line.
- In factor 1 we can use a figurative constant (*LOVAL, *HIVAL, *START, *END), an RRN value, a key value, a key list, or a key data structure.
- Unlike SETLL, SETGT does not set %EQUAL; use the %FOUND BIF to determine whether the operation found a record to position to.
Fixed format
C 'MONKEY' SETGT RECORD
Free format
Setgt 'MONKEY' RECORD;
- READ: This opcode reads records from a database file. The record read is based on the pointer set by the SETxx opcodes; once a record is read, the pointer moves to the next available record. READ is generally used with *LOVAL/*HIVAL and the SETxx opcodes. For example, SETLL sets the file pointer at the first occurrence of the record whose key field/RRN value is greater than or equal to the factor-1 search argument; after positioning the file pointer, we can perform a file operation like READ.
- We can also use a data structure as the result field to retrieve the values into it.
- If a file is declared in update mode, the READ opcode takes an exclusive lock on the record, which can cause issues if multiple programs are using the file at the same time. To circumvent this, we can use the READ opcode with the (n) extender to read the record with no lock.
To monitor the READ opcode for errors, we can also use the (e) extender and check for errors on the next line using the %ERROR() built-in function.
Fixed format syntax:
Fixed format syntax:

Factor 1 | Opcode | Factor 2 | Result Field | HI | LO | EQ |
---|---|---|---|---|---|---|
 | READ(N/E) | File or record format name | Data structure to hold the result | | Error | End-of-file condition indicator |

The HI, LO, and EQ columns are the resulting indicators.
Free format syntax:
Read(n|e) file/record format [Data Structure]
- READE: This opcode reads the next record whose key exactly matches the value specified in factor 1. If multiple records matching an exact criterion are to be read, READE can be used after a SETxx opcode with the same factor 1.
- If no matching record is found, the EOF condition is set.
- To handle exceptions, the operation extender (e) can be used.
- READP: It is generally used to read the file in reverse order. READP moves the pointer to the previous record, reads it, and then moves the pointer to the next previous position. If there are no more records, it sets the EOF indicator on. It is usually used with SETLL and *HIVAL.
FORDERS  IF   E           K DISK
C     *HIVAL        SETLL     ORDERS
C                   READP     ORDERS
C                   DOW       NOT %EOF()
C     PNUM          DSPLY
C                   READP     ORDERS
C                   ENDDO
C                   SETON                                        LR
- READPE: This opcode reads the previous record whose key exactly matches the value specified in factor 1. If multiple records matching an exact criterion are to be read, READPE can be used after a SETxx opcode with the same factor 1. Once a record is read, the pointer moves to the next previous record matching the key specified in factor 1.
- If the matching criteria is not found the EOF condition is reached.
- To handle exceptions, operation extender (e) can be used.
- READC: This opcode is used with subfiles, and it helps identify which subfile records have been modified. Exceptions can be handled with the operation extender (e), and the EOF indicator is set once there are no more changed records. See the example below, where READC is used to read changed records from the subfile SFLORD.
C                   READC     SFLORD
C                   DOW       NOT %EOF
C                   SELECT
C     ACTION        WHENEQ    '1'
C                   EXSR      HEADER
C     ACTION        WHENEQ    '2'
C                   EXSR      DETAIL
C     ACTION        WHENEQ    '4'
C                   EXSR      FOOTER
C     ACTION        WHENEQ    '5'
C                   EXSR      SUM
C                   OTHER
C                   EXSR      VALIDATE
C                   ENDSL
C                   READC     SFLORD
C                   ENDDO
- CHAIN: This opcode finds an exact match for the value specified in factor 1. Under the covers, it is similar to SETLL followed by READE. The difference is that CHAIN cannot fetch the second exact match when used in a do-while loop. You can also use an RRN as factor 1 to read by position if the file has no key defined. The operation extenders (n) and (e) can be used to read the record with no lock and to handle errors, respectively.
C                   EVAL      KEYV = S_PNUM
C     KEYV          CHAIN(E)  HDRREC
C                   IF        %FOUND()
C                   EVAL      ORDBADD = S_ADDR
C                   UPDATE    HDRREC
C                   ENDIF
Write Op-Code:
This opcode writes a new record to a database file, or can be used with display files to output data on the screen. The opcode supports data structures as well. Below is an example for the file ORDER with record format ORDERA.
Without Data structure
WRITE ORDERA;
With Data structure
WRITE ORDERA record;
Update Op-Code:
This opcode updates an existing record in a database file. A read operation must occur prior to the UPDATE opcode. The opcode supports data structures as well. Below is an example for the file ORDER with record format ORDERA.
Without Data structure
Update ORDERA;
With Data structure
Update ORDERA record;
Specific field update
Update ORDERA %fields(fld1:fld2:...);
Subroutines
A subroutine is a self-contained section of code within an RPG program that performs a specific task or set of tasks.
Subroutines in RPG are used to promote code reusability, modularity, and maintainability by encapsulating a particular functionality or calculation into a separate and callable unit.
RPG subroutines are coded with the BEGSR and ENDSR operation codes and executed with the EXSR (Execute Subroutine) operation code.
Syntax for Fixed Format:
Exsr: It is used to call and process a subroutine.
Factor 1 | Code | Factor 2 | Result | Resulting indicator |
---|---|---|---|---|
 | EXSR | Subroutine name | | |
Begsr: The op-code represents the beginning of a subroutine; the subroutine name is placed in factor 1.
Factor 1 | Code | Factor 2 | Result | Resulting indicator |
---|---|---|---|---|
Subroutine name | BEGSR | | | |
Endsr: ENDSR must be the last statement in the subroutine.
Factor 1 | Code | Factor 2 | Result | Resulting indicator |
---|---|---|---|---|
Label (optional) | ENDSR | | | |
Syntax for Free Format:
Exsr: It is used to call and process a subroutine.
EXSR subroutine-name;
Begsr: The opcode represents beginning of a subroutine.
BEGSR subroutine-name;
Endsr: ENDSR must be the last statement in the subroutine.
ENDSR;
Usage:
Subroutines in RPGLE are used to encapsulate a specific piece of functionality within a program. Here’s how subroutines are typically used in RPGLE:
- 1. Modularity: Subroutines allow us to divide our RPGLE program into smaller, manageable units of code. Each subroutine can be responsible for a specific task or operation.
- 2. Reusability: Once we define a subroutine, we can call it multiple times from within our program, providing code reusability. This reduces code duplication and ensures that changes to a particular functionality only need to be made in one place.
- 3. Readability: Subroutines make our RPGLE code more readable and understandable by breaking it into smaller, well-named, and well-documented units.
- 4. Encapsulation: Subroutines can encapsulate complex operations or algorithms, making the main program more focused on the overall flow and logic of the application.
Restriction:
Subroutines can be restricted or limited in various ways based on the programming context and the features of RPGLE itself.
Below are some common restrictions and limitations on subroutines in RPGLE:
- 1. No Nested Subroutines: RPGLE does not support nested subroutines. This means you cannot define a subroutine within another subroutine. Subroutines are standalone and independent.
- 2. No Recursion: RPGLE doesn’t directly support recursion within subroutines, which means a subroutine cannot call itself directly or indirectly. Recursive calls are not allowed, as RPGLE does not have the necessary stack management for recursion.
- 3. No Local Variables: RPGLE subroutines do not have local variables or local storage.
- 4. Compile-Time Binding: In RPGLE, subroutine calls are typically resolved at compile time rather than at runtime. This means that if you change a subroutine, you often need to recompile all programs that call it.
- 5. No Explicit Return Statements: RPGLE subroutines do not require explicit “return” statements like some other languages. Control returns automatically to the calling program or procedure at the end of the subroutine.
- 6. Shared Memory Space: All variables declared within a program, including subroutines, share the same memory space. This means no true local variables exist within subroutines.
- 7. Propagation of unhandled exceptions: Unhandled exceptions within subroutines propagate to the calling program or higher-level exception handlers.
Best Practices:
- 1. Design subroutines with clear error handling strategies.
- 2. Use return codes and indicators effectively for error signalling.
- 3. Consider external procedures or sub procedures for more structured exception handling.
Example in Fixed format:
Here's a breakdown of the example:
- a) The first three lines declare the variables.
- b) On the 7th line, the subroutine is executed with EXSR; this is the operation code used to execute the subroutine, and factor 2 holds the subroutine name.
- c) Additionally, we define the subroutine elsewhere in the program.
- d) On the 8th line, the subroutine begins with BEGSR, the operation code that marks the beginning of the subroutine.
- e) Lines 9 through 12 contain the subroutine's logic.
- f) Line 13 ends the subroutine with ENDSR, the operation code that marks the end of the subroutine.
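The original listing is shown only as a screenshot; a sketch along the lines described (names and values are illustrative, and line positions will differ slightly):

     D num1            S              5P 0 INZ(10)
     D num2            S              5P 0 INZ(20)
     D result          S              7P 0

     C                   EXSR      AddNums
     C                   SETON                                        LR

     C     AddNums       BEGSR
     C                   EVAL      result = num1 + num2
     C     result        DSPLY
     C                   ENDSR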
Free format example:
Here's a breakdown of the example:
- a) The first three lines declare the variables.
- b) On the 6th line, the subroutine is executed with EXSR followed by the subroutine name.
- c) Additionally, we define the subroutine elsewhere in the program.
- d) On the 8th line, the subroutine begins with BEGSR, the operation code that marks the beginning of the subroutine.
- e) Lines 9 through 12 contain the subroutine's logic.
- f) Line 13 ends the subroutine with ENDSR, the operation code that marks the end of the subroutine.
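Again, a sketch with illustrative names:

dcl-s num1 packed(5:0) inz(10);
dcl-s num2 packed(5:0) inz(20);
dcl-s result packed(7:0);

exsr AddNums;
*inlr = *on;

begsr AddNums;
  result = num1 + num2;
  dsply ('Result = ' + %char(result));
endsr;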
Subroutines in RPG are a way to modularize your code and make it more organized and readable. You can call a subroutine multiple times from different parts of your program, and it allows you to encapsulate and reuse specific logic.
Error Handling
Introduction
Exception handling in the AS400 system involves the process of gracefully managing and recovering from errors, exceptions, or abnormal conditions that may arise during program execution or system operations. It plays a critical role in ensuring the reliability, robustness, and integrity of applications running on the AS400.
RPG Exception Handling
RPG classifies exceptions into two main categories:
- Program Exception: Some examples of program exceptions are division by zero, array index out-of-bounds, SQRT of a negative number, invalid date, time, or timestamp value.
- File Exception: Some examples of file exceptions are undefined record type or a device error, record lock, update operation attempted without a prior read.
Status code
%STATUS code is a built-in function used to retrieve the status code of the most recent operation performed by operation codes within the program. This status code serves as an indicator of the success or failure of the operation. Typically, a status code of 0 signifies that the operation was completed successfully, while non-zero values indicate various types of error or exceptional conditions encountered during the operation.
The error is identified by a five-digit status code provided by %STATUS. Program status codes range from 00100 to 00999, whereas file status codes fall between 01000 and 01999. A status code between 00000 and 00050 is regarded as normal; that is, it is not caused by an exception or error condition.
There are different ways to indicate that RPG should handle an exception.
- (a) Using an Error Indicator
If the calculation specification has an error indicator for an operation and an exception occurs for that operation:
- The indicator is set on.
- The exception is handled.
- Control resumes with the next RPG operation.
Sample code:
FEMPMASTER UF A E             DISK    USROPN
C     5             SETLL     EMPR                                 33
C                   EXSR      INDERRSR
C                   EVAL      EMPNAME = 'ALEXA'
C                   UPDATE    EMPR                                 33
C                   EXSR      INDERRSR
C                   SETON                                          LR
C     INDERRSR      BEGSR
C                   IF        *IN33 = *ON
C                   IF        %STATUS(EMPMASTER) = 1211
C                   OPEN      EMPMASTER
C                   ELSEIF    %STATUS(EMPMASTER) = 1221
C                   READ(E)   EMPMASTER
C                   EVAL      EMPNAME = 'ALEXA'
C                   UPDATE(E) EMPR
C                   ENDIF
C                   ENDIF
C                   ENDSR
(b) Using Operator Extender (E)
If an 'E' operation code extender is included in the calculation specification and no error indicator is present, the error is managed through this operation extender.
%STATUS and %ERROR, two built-in functions, will be used to handle the error.
Sample Code:
Dcl-F EmpMaster Usage(*Input:*Update:*Output) UsrOpn;

Setll(E) 5 EmpR;
Exsr ErrSr;
EmpName = 'ABC';
Update(E) EmpR;
Exsr ErrSr;
*Inlr = *On;

Begsr ErrSr;
  If %Error();
    If %Status(EmpMaster) = 1211;
      Open EmpMaster;
    Elseif %Status(EmpMaster) = 1221;
      Read(E) EmpMaster;
      EmpName = 'ABC';
      Update(E) EmpR;
    Endif;
  Endif;
Endsr;
- Using a Monitor Block
A MONITOR group performs conditional error handling based on the status code. It consists of:
- A Monitor Block
- Zero or more ON-ERROR blocks
- An ENDMON statement
After the MONITOR statement, control passes to the next statement. The statements from the MONITOR statement to the first ON-ERROR statement make up the monitor block. If an error arises while processing the monitor block, control is transferred to the relevant ON-ERROR block.
If all the statements in the MONITOR block execute successfully without errors, control proceeds to the statement following the ENDMON.
The monitor group can be specified anywhere in the calculations. It can be nested within IF, DO, SELECT, or other monitor groups, and IF, DO, and SELECT groups can be nested within monitor groups.
Level indicators can be used on the MONITOR operation, to indicate that the MONITOR group is part of total calculations.
Conditioning indicators are applicable on the MONITOR statement. If they are not satisfied, control passes immediately to the statement following the ENDMON statement of the monitor group. Conditioning indicators cannot be used on individual ON-ERROR operations.
When a subprocedure called from a MONITOR block encounters an error, the subprocedure's own error handling takes precedence. For instance, the *PSSR subroutine within the subprocedure is called. Only if the subprocedure is unable to handle the error and the call fails with the error-in-call status 00202 is the MONITOR group containing the call taken into consideration.
Errors that arise in a subroutine that is called from an EXSR statement within the monitor group are handled by the monitor group. The subroutine’s monitor groups take precedence if it has any.
Branching operations are not allowed within a MONITOR block but are allowed within an ON-ERROR block.
A LEAVE or ITER operation within a monitor block applies to the active DO group containing the monitor block. A LEAVESR or RETURN operation within a monitor block applies to the subroutine, subprocedure, or procedure containing the monitor block.
A few examples are given below. The first shows how to capture a 'divide by zero' error, which has program status code 00102:
Sample Code:
Dcl-S Num1 Zoned(2:0);
Dcl-S Num2 Zoned(2:0);
Dcl-S Result Zoned(5:0);
Dcl-S Error Char(50);

Num1 = 10;
Monitor;
  Result = Num1/Num2;
On-Error 102;
  Result = 0;
  Error = 'Divide by 0';
EndMon;
*Inlr = *On;
Example 2:
Sample Code:
Dcl-F EmpMaster Usage(*Input) UsrOpn; // UsrOpn assumed so the explicit OPEN below is meaningful
Dcl-S Error Char(20);

Monitor;
  Open EmpMaster;
On-Error *File;
  Error = 'File Not Opened';
EndMon;
*Inlr = *On;
- Using an Error Subroutine
(a) Using a File Error (INFSR) Subroutine
To handle a file error or exception, you can write a file error (INFSR) subroutine. When a file exception occurs:
- The INFDS is updated.
- A file error subroutine (INFSR) receives control.
A file error subroutine can handle errors in more than one file.
The following restrictions apply:
- If an error occurs that is not related to the operation (for example, an array-index error on a CHAIN operation), then any INFSR error subroutine would be ignored. The error would be treated like any other program error.
- Control passes to the RPG default exception handler rather than the error subroutine handler if a file exception arises at the beginning or end of a program (for instance, on an implicit open at the beginning of the cycle). As such, there will be no processing of the file error subroutine.
- Errors in a global file used by a sub procedure cannot be handled by an INFSR.
Take the following actions to include a file error subroutine in your program:
- On a File Description specification, enter the subroutine's name after the keyword INFSR. If the subroutine name is *PSSR, the program error subroutine is given control over exceptions on this file.
- You can use the keyword INFDS to optionally identify the file information data structure on a File Description specification.
- Enter a BEGSR operation in which the subroutine name specified for the keyword INFSR appears in the Factor 1 entry.
- Determine whether there is a return point and code it on the subroutine’s ENDSR operation.
- Code the rest of the file error subroutine. While any of the ILE RPG compiler operations can be used in the file error subroutine, it is not recommended that you use I/O operations to the same file that got the error. The ENDSR operation must be the last specification for the file error subroutine.
Sample Code:
Dcl-F EmpMaster Usage(*Input:*Update:*Output) INFDS(InfDs) Keyed INFSR(InfoSr) Usropn;
Dcl-Ds InfDs;
  File_Status *Status;
End-Ds;
Dcl-S ReturnCd Char(6);

Setll 00002 EmpR;
EmpName = 'ABC';
Update EmpR;
*Inlr = *On;

Begsr InfoSr;
  If File_Status = 1211;
    Open EmpMaster;
    ReturnCd = '*GETIN';
  Elseif File_Status = 1221;
    Read(E) EmpMaster;
    Update(E) EmpR;
    ReturnCd = '*CANCL';
  Endif;
Endsr ReturnCd;
(b) Using a Program Error Subroutine.
Program error subroutines (*PSSR) can be written to handle exceptions or program errors. When a program error occurs:
- The program status data structure is updated.
- If an indicator is not specified in positions 73 and 74 for the operation code, the error is handled, and control is transferred to the *PSSR.
After a file error, you can explicitly move control to a program error subroutine by adding *PSSR to the File Description specifications after the keyword INFSR.
A *PSSR can be coded for any procedure within the module. Each *PSSR is specific to the procedure in which it is coded.
To add a *PSSR error subroutine to your program, you do the following steps:
- Optionally identify the program status data structure (PSDS) by specifying an S in position 23 of the definition specification.
- Enter a BEGSR operation with a Factor 1 entry of *PSSR.
- Identify a return point, if any, and code it on the ENDSR operation in the subroutine.
- Code the rest of the program error subroutine. Any of the ILE RPG compiler operations can be used in the program error subroutine. The ENDSR operation must be the last specification for the program error subroutine.
Sample Code:
Dcl-S Num1 Zoned(2:0);
Dcl-S Num2 Zoned(2:0);
Dcl-S Result Zoned(5:0);
Dcl-S Error Char(20);
Dcl-S ReturnCd Char(6);
Dcl-Ds PSDS1 Psds;
  Pgm_Status *Status;
End-Ds;

Num1 = 10;
Result = Num1/Num2;
*Inlr = *On;

Begsr *Pssr;
  If Pgm_Status = 00102;
    Error = 'Divide by zero';
    ReturnCd = ' ';
  Else;
    Error = 'Error with status code';
    ReturnCd = '*CANCL';
  Endif;
Endsr ReturnCd;
- Default Exception Handler
The RPG default error handler is called if there is no error indicator, 'E' extender, or error subroutine coded, and no active MONITOR group can handle the exception.
Sample Code:
Dcl-F EmpMaster Usage(*Input:*Update:*Output);

Setll 5 EmpR;
EmpName = 'ABC';
Update EmpR;
*Inlr = *On;
CL Program Exception Handling
Monitor Message (MONMSG)
The monitor message (MONMSG) command enables us to take corrective action for escape, status, and notification messages that are present in a CL program at run time.
Messages are sent to the program's message queue; MONMSG monitors them for the conditions specified on the command. If the condition exists, the CL command specified on the MONMSG command runs.
It doesn’t handle diagnostic messages, but we can receive those messages from the message queue to get additional information related to the error.
Types of monitor message
- Escape Message
An escape message alerts your program to an error that prompted the sender to terminate the program. You can terminate your program or take corrective action by monitoring for escape messages.
- Status or Notify Message
An abnormal condition that is not severe enough for the sender to terminate is reported to your program via status and notify messages. By monitoring for status or notify messages, your program can detect the condition and not allow the function to continue.
Two levels of MONMSG command:
- Program level
In the CL program, the MONMSG is defined right after the last declare command. It will catch all escape-message errors in the program, regardless of whether there are any command-level MONMSGs.
Sample code:
PGM
DCLF FILE(EMPMASTER) OPNID(OPNID1)
/* Program Level MONMSG */
MONMSG MSGID(CPF0000) EXEC(GOTO CMDLBL(ERROR))
CHKOBJ OBJ(*LIBL/EMPMASTER) OBJTYPE(*FILE) MBR(*FIRST)
ERROR: SNDPGMMSG MSG('Object not found in the *LIBL')
ENDPGM
- Command level
Here the MONMSG command immediately follows a CL command. If there is an error on that particular CL statement and it satisfies the condition specified in the MONMSG, the error is caught by this MONMSG.
Sample Code:
PGM
DCLF FILE(EMPMASTER) OPNID(OPNID1)
/* Command Level MONMSG */
CHKOBJ OBJ(*LIBL/EMPMASTER) OBJTYPE(*FILE) MBR(*FIRST)
MONMSG MSGID(CPF9801) EXEC(GOTO CMDLBL(ERROR))
READ: RCVF OPNID(OPNID1)
MONMSG MSGID(CPF0864) EXEC(GOTO CMDLBL(END))
GOTO READ
ERROR: SNDPGMMSG MSG('Object not found in the *LIBL')
END: ENDPGM
Load All Subfile
In this instance, SFLSIZ indicates the total number of records that can be loaded, and SFLPAG indicates the number of records per page in the subfile.
The maximum value for SFLSIZ is 9999.
In load all subfile, the system automatically handles PAGEUP and PAGEDOWN.
Usage:
A load-all subfile program can be written in RPG, SQLRPG, RPGLE, or SQLRPGLE.
We also need to create a display file (DSPF).
Restrictions and compatibility:
It can display a maximum of 9999 records in the subfile; if more records need to be displayed, a load-all subfile is not suitable.
Code Example:
Physical file – EMPLOYEE
Column Names | Data Type | Length | Decimal | Description |
---|---|---|---|---|
EMPNO | Zoned Decimal | 10 | 0 | EMPLOYEE NUMBER |
EMPNAME | Character | 20 | | EMPLOYEE NAME |
EMPDEPT | Character | 10 | | DEPARTMENT |
EMPMOBNO | Zoned Decimal | 10 | 0 | MOBILE NO |
Display file – EMPLOYEED
In the EMPSFL subfile record format, we defined the fields to be populated in the subfile.
In the EMPCTL subfile control record format, we defined the required header information, or header fields, to be populated.
In the EMPFTR record format, we defined the footer information displayed below the subfile record format.
We used the OVERLAY keyword in the subfile control record format so the EMPFTR record format can be overlaid on the subfile.
In line 13.00, we defined the SFLDSP keyword with indicator 51; this is used in the RPG program to display the subfile record format.
In line 14.00, we defined the SFLDSPCTL keyword with indicator 52; this is used in the RPG program to display the subfile control record format.
In line 15.00, we defined the SFLCLR keyword with indicator 53; this is used in the RPG program to clear the subfile before loading it.
In line 16.00, we defined the SFLEND keyword with indicator 54; this is used to display 'More...' when there is a next page of records and 'Bottom' on the last page of subfile records.
RPGLE Program – EMPLOYEER (Free format)
Line 7.00 – declaration of the physical file EMPLOYEE.
Line 8.00 – declaration of the display file with subfile record EMPSFL. We used the SFILE keyword to associate the subfile record with its relative record number (RRN).
In the clearSubfile subroutine, we turn on the SFLCLR indicator *IN53 to clear the subfile and write the subfile control record format.
In line 30.00, we initialize the relative record number (RRN) to 0.
In the loadSubfile subroutine, we read the file EMPLOYEE from top to bottom and write to the subfile record format, incrementing the subfile relative record number (RRN).
When the SFLEND indicator *IN54 is turned off, 'More...' is shown at the bottom right of the subfile records.
When the SFLEND indicator *IN54 is turned on, 'Bottom' is shown at the bottom right of the subfile records.
In the displaySubfile subroutine, we display the subfile by turning on the SFLDSPCTL indicator *IN52.
The SFLDSP indicator *IN51 is turned on if the number of records written to the subfile is greater than zero.
In line 67.00, we write the footer record format, which is overlaid on the subfile.
In line 68.00, we display the subfile using the EXFMT operation, which is a combination of WRITE and READ.
When F3 (*IN03) or F12 (*IN12) is pressed, the program exits the subfile.
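The full listing appears only as screenshots; a minimal free-format sketch of the flow described above (line numbers will differ, and the EMPLOYEE field names are assumed to match the subfile fields):

dcl-f EMPLOYEE usage(*input);
dcl-f EMPLOYEED workstn sfile(EMPSFL : rrn);
dcl-s rrn zoned(4:0);

exsr clearSubfile;
exsr loadSubfile;
exsr displaySubfile;
*inlr = *on;

begsr clearSubfile;
  *in53 = *on;                 // SFLCLR: clear the subfile
  write EMPCTL;
  *in53 = *off;
  rrn = 0;
endsr;

begsr loadSubfile;
  read EMPLOYEE;
  dow not %eof(EMPLOYEE) and rrn < 9999;
    rrn += 1;
    write EMPSFL;
    read EMPLOYEE;
  enddo;
  *in54 = *on;                 // SFLEND: system shows More.../Bottom
endsr;

begsr displaySubfile;
  *in52 = *on;                 // SFLDSPCTL
  *in51 = (rrn > 0);           // SFLDSP only if records were written
  dow not (*in03 or *in12);    // F3/F12 to exit
    write EMPFTR;
    exfmt EMPCTL;
  enddo;
endsr;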
RPGLE Program – EMPLOYEER (Fixed format)
Calling the program EMPLOYEER displays the load-all subfile with data loaded from the physical file EMPLOYEE, as shown below –
After pressing Page Down, as shown below –
Expandable Subfile
Index
- Introduction
- Example
- Usage of Expandable Subfile
- Restrictions of Expandable Subfile
Introduction
An expandable subfile is also referred to as an elastic or growing subfile because of its growing nature. Unlike a load-all subfile, where up to 9999 records are loaded into the buffer in a single shot, an expandable subfile loads data into the buffer one page at a time.
Since data is loaded into the buffer only upon PAGEDOWN, the program must handle that case. PAGEUP is handled automatically by the system, since that data is already in the buffer.
The basic requirement for an expandable subfile when defining the DDS source is that SFLSIZ be at least one greater than SFLPAG. The subfile buffer's SFLSIZ is then expanded to hold all records, up to the buffer limit of 9999.
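A sketch of the relevant DDS control-record keywords (record names and indicators are illustrative): SFLSIZ is one more than SFLPAG, and ROLLUP returns control to the program so it can load the next page.

A          R CUSTSFL                   SFL
A          R CUSTCTL                   SFLCTL(CUSTSFL)
A                                      SFLSIZ(0011)
A                                      SFLPAG(0010)
A                                      ROLLUP(25)
A  30                                  SFLDSP
A  31                                  SFLDSPCTL
A  32                                  SFLCLR
A  33                                  SFLEND(*MORE)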
Example
Display File: CUSTDSPF
Main Program: CUSTMAIN
Output:
Usage of Expandable Subfile
- When you want to show a lot of records but are unsure of the exact size ahead of time, expandable subfiles can be helpful.
- They are frequently used in situations where the quantity of records can differ greatly, such as showing client transactions, inventory goods, or staff data.
- Keep in mind that expandable subfiles are a useful tool for managing dynamic data in AS/400 programs because they offer flexibility and adaptability.
Restrictions of Expandable Subfile
Performance: Due to the AS/400 system’s limited processing power and memory, expandable subfiles may experience performance problems with bigger datasets, even though they are effective for smaller datasets.
Single Page Subfile
Index
- Introduction
- Examples
- Usage of Single Page Subfile
- Restrictions of Single Page Subfile
Introduction
- In AS/400 programming, a single-page subfile shows all of the accessible data on a single screen/page, in contrast to a typical subfile, which may span numerous pages and require paging controls to traverse the data.
- The subfile page (SFLPAG) and subfile size (SFLSIZ) in this instance must match. This subfile is sometimes referred to as non-elastic, meaning that the buffer size will always be the same as the page size.
- The buffer is cleared each time before records are written to it. After the subfile buffer has been cleared, a number of records equal to SFLPAG is written.
- PAGEUP and PAGEDOWN handling is necessary in this situation.
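A sketch of the control-record keywords (names and indicators are illustrative): SFLSIZ equals SFLPAG, and both ROLLUP and ROLLDOWN are returned to the program for handling.

A          R CUSTSFL                   SFL
A          R CUSTCTL                   SFLCTL(CUSTSFL)
A                                      SFLSIZ(0010)
A                                      SFLPAG(0010)
A                                      ROLLUP(25)
A                                      ROLLDOWN(26)
A  30                                  SFLDSP
A  31                                  SFLDSPCTL
A  32                                  SFLCLR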
Example
Display File: DSPF
Main Program: CUSTMAIN
Output:
Usage of Single Page Subfile
In AS/400 programming, single page subfiles have significant benefits and are used in a variety of applications. The following are some typical applications for subfiles with one page:
- Data Inquiry Screens: When consumers need to access information fast without having to navigate through several pages, Single page subfiles are frequently employed. Customer information may be shown on a single page subfile, for instance, in a sales application’s customer search screen.
- Lookup Tables: These help show reference data or lookup tables that users need to see regularly. For easy access, a single page subfile can be used to display a product catalog in an inventory management system.
- Master Data Maintenance: Master data records, such as product or client information, may be displayed and edited using single page subfiles. Paging controls are not necessary because users may read and edit records on a single screen.
- Reporting: Single page subfiles can be used to present reports in scenarios when the dataset is small enough to fit on one screen. For ease of viewing, a daily sales report, for instance, may be presented as a Single page subfile.
- Dashboard Views: Dashboard views that give a summary of important metrics or performance indicators can be created using single page subfiles. All pertinent data is readily visible to users on a single screen.
- Workflow Management: Tasks or activities allocated to a user can be shown in single page subfiles inside workflow management apps. On a single screen, users may conveniently see and manage their responsibilities.
- Status Monitoring: They can be used to show information or provide real-time status updates. For simple monitoring, a monitoring program, for instance, can provide the current state of all system components on a single page subfile.
All things considered, single page subfiles in AS/400 programming provide a convenient means of presenting and interacting with data on a single screen, which makes them appropriate for a variety of uses in a variety of sectors.
Restrictions of Single Page Subfile
In AS/400 programming, single page subfiles provide several advantages, but there are also some constraints and limits to take into account.
- Limited Data Display: For showing comparatively modest datasets that easily fill one screen, single page subfiles are appropriate. It might not be feasible to use a single page subfile if the dataset is too big to fit on a single screen; instead, you might need to think about different pagination methods.
- Effect on Performance: When loading all of the data onto one screen, performance may suffer, particularly if there are complicated calculations or processing required, or if the dataset is big. Large dataset retrieval and formatting might tax system resources and degrade system performance.
- User Interface Clutter: When too much information is presented on one screen, the user interface may become crowded, making it challenging for users to locate the information they want. To guarantee clarity and usefulness, the screen layout must be properly designed.
- Limited Navigation: Single-page subfiles do not support scrolling over several pages of data. If the dataset is larger than what fits on a single screen, users cannot see all of the data or move through it efficiently.
- Data Integrity: Data integrity may be compromised if entries are edited or updated directly on a single page subfile without first undergoing appropriate validation and error handling. Strong validation procedures must be used to guarantee the consistency and correctness of the data.
- Scalability: Single page subfiles become less effective as the dataset gets larger. If an application needs to handle progressively larger datasets over time, scalability could become a problem.
- Restricted ability: Depending on user settings or system circumstances, single page subfiles could not have the ability to dynamically alter the display or arrangement of data. Programming code modifications could be necessary to add or remove fields from the display.
Single page subfiles can nevertheless be a useful tool for presenting and engaging with tiny datasets in some applications, despite these limitations. However, while choosing whether to use single page subfiles in AS/400 programming, it’s crucial to carefully analyze the trade-offs in design and related constraints.
Printer Files
There are two different types of printer files:
- Program defined printer file
- Externally described printer file
1. Program defined printer file
A program-described printer file is a printer file that is defined inside an application program. This means the file, record, and field descriptions are defined internally, in the program itself. With this approach, the report specifications are hard-coded into the program and become part of the program's compiled object.
Program:
Sample Code:
/Free
Dcl-F SalAcc Usage(*input:*output) keyed;
Dcl-F QPrint Printer(132) Usage(*output) Oflind(*In90);

Except Header;
Setll *Loval SalAcc;
Read(n) SalAcc;
Dow Not %Eof(SalAcc);
  If *In90 = *On;
    Except Header;
  Endif;
  Except Detail;
  Read(n) SalAcc;
Enddo;
Except Footer;
*Inlr = *On;
/End-Free
OQPRINT    E            HEADER
O                                        6 'PAGE'
O          Page                         10
O                                       47 'SALARY ACCOUNT REPORT'
O                                       65 'DATE'
O          Udate         Y              75
O          E            HEADER      1
O                                       08 'EMPID'
O                                       25 'DEPTCODE'
O                                       40 'ACCOUNT NO'
O                                       65 'SALARY STATUS'
O          E            DETAIL      1
O          EMPID                        10
O          DEPTCODE                     22
O          ACCOUNT_NO                   42
O          STATUS                       62
O          E            FOOTER      1
O                                       42 '******END OF REPORT******'
The program-described printer file layout in the O-specs shows how the records and their fields are to be printed in the printer file.
In the O-specs, QPRINT is defined as a program-described printer file with a number of record formats. Each format is defined with type E (exception), so all of the record formats are printed with the EXCEPT opcode. First, we use the HEADER record format as the EXCEPT name.
In the O-specs we assign the text 'PAGE', 'SALARY ACCOUNT REPORT', and 'DATE', which end at positions 6, 47, and 65 respectively.
PAGE is defined as a field name ending at position 10; the PAGE special word automatically sets the page number for the printer file.
We use UDATE to get the six-digit date (mmddyy) and edit code Y to format it as MM/DD/YY, ending at position 75.
The second record format is HEADER again; it prints only the column headings 'EMPID', 'DEPTCODE', 'ACCOUNT NO', and 'SALARY STATUS' at the specified end positions. The 1 defined with HEADER advances one line before printing the second header record format.
In the third record format, DETAIL, we refer directly to the physical file fields EMPID, DEPTCODE, ACCOUNT_NO, and STATUS; the 1 defined with DETAIL advances one line before printing.
After printing all the details, we print the end of the report using the fourth record format, FOOTER; the 1 defined with FOOTER advances one line before printing.
Result
Advantages of Program-Defined Printer Files:
- The program is easy to maintain because the printer file specifications are directly embedded within it.
- During compilation, program and printer file changes are synchronized.
- There are no external dependencies because the printer file and the program are self-contained.
2. Externally described printer file
Printer files containing report specifications can be defined externally to any program that uses them. This means a printer file's report specifications are maintained independently of any program and are compiled into a printer file object.
Two ways to design an Externally described printer file:
- Design an externally described printer file using STRSEU:
Create a source member with type PRTF using the STRSEU command. This DDS entry's designed screen is displayed below; we take option 19 against the DDS source member to view the designed screen.
- Design an externally described printer file using STRRLU:
Step 1: Type STRRLU on the command line and press F4. Fill in the source physical file, library, and source member name DEMOPRTF, and press Enter.
Step 2: Insert a line, then use DR to define a record format, then VF to view fields, and press Enter.
Step 3: On FLD1 write ‘SALARY ACCOUNT REPORT’ and press enter.
Step 4: Repeat step (2) to add more record format and field.
Step 5: Similarly, On FLD1 of RCD002 define the column as ‘EMPID’, ‘DEPTCODE’, ‘ACCOUNT NO’, and ‘SALARY STATUS’ and then press enter.
Step 6: On FLD1 Press F10 and give option 1 to add the fields from the database file SALACC and press enter. The selected field will show up at the bottom of the screen. Set the cursor at the FLD1 line where you want to add the field and Press Enter. The field definition will be placed there.
Step 7: Repeat step (2) and then Press F11 to define the field on FLD1 of RCD004 record format.
Step 8: Press SHIFT F6 + F10 to rename the record format. Rename RCD001 = HEADER, RCD002 = HEADER1, RCD003 = DETAIL, and RCD004 = FOOTER.
Example:
Program:
Sample Code:
Dcl-F SalAcc Usage(*input:*output) keyed;
Dcl-F SalPrtf@ Printer Usage(*output) Oflind(*In90);

Write Header1;
Write Header2;
Setll *Loval SalAcc;
Read SalAcc;
Dow Not %Eof(SalAcc);
  If *In90 = *On;
    Reset *IN90;
  Endif;
  Write Detail;
  Read SalAcc;
Enddo;
Write Footer;
*Inlr = *On;
Result:
Advantages of Externally Described Printer Files:
- It’s not necessary to recompile the programs that use the printer file whenever changes are made. Such flexibility is very helpful when you need to change report layouts without having an impact on currently running programs.
- You can achieve better modularity and maintainability by separating report specifications from program logic.
- The same printer file can be shared by multiple programs, reducing redundancy and providing consistency.
- Data Description Specifications (DDS) for printer files with external descriptions can be created and reports can be generated using software tools.
Embedded SQL
Introduction
Embedded SQL programming in IBM i empowers developers to seamlessly integrate SQL statements within RPG programs. This integration allows for efficient database interaction and manipulation tasks directly within the AS/400 environment.
Type of Embedded SQL
- Static SQL
- Static SQL involves SQL statements that are directly embedded within the source code of programs during compilation time.
- These statements cannot be changed or modified during runtime.
- Static SQL is suitable for scenarios where the SQL queries are known at compile time and do not need to be dynamically generated based on user input or other runtime conditions.
- Dynamic SQL
- Dynamic SQL allows for the generation and execution of SQL statements during runtime.
- Unlike static SQL, dynamic SQL statements can be constructed dynamically within the program based on runtime conditions, user input, or other variables.
- Dynamic SQL provides greater flexibility and versatility, as it enables programs to adapt to changing requirements or conditions at runtime.
Compilation command
CRTSQLRPG – To Create SQL RPG Program
CRTSQLRPGI – To Create SQL ILE RPG Object
Compilation Process
- Compared to a typical RPG application, embedded SQL requires a different compilation process.
- There are two sections to the compilation:
- SQL Pre-compilation: verifies the embedded SQL in the program and converts it into dynamic program calls. If an error is found with a host variable, an SQL statement selection field, or anything else SQL-related, the compilation process ends and an SQL pre-compilation report is produced.
- Main Program Compilation: if there are no errors in the SQL pre-compilation, the main program is compiled and a compilation report is produced.
Host Variable
The values retrieved by your program are placed into data items, such as standalone variables, arrays, data structures, or indicators, that are defined by your program and named in the INTO clause of a SELECT INTO or FETCH statement. These data items are called host variables.
In SQL, a host variable refers to a field (standalone variable, array, data structure, or indicator) in your program that you reference within an SQL statement. Typically, it serves as the source or target for the value of a column. The host variable and the corresponding column must have compatible data types. However, host variables cannot be used to identify SQL objects such as tables or views, except in the DESCRIBE TABLE statement.
Note: When you utilize a host variable instead of a literal value in an SQL statement, you provide the application program with the flexibility to process various rows in a table or view.
- In a WHERE clause: Host variables allow you to define a value in the predicate of a search condition or to substitute a literal value within an expression. For example, in SQLRPGLE:
wkEmpID = ;
exec sql select empname into :wkEmpName
         from empMaster
         where empid = :wkEmpID;
- As a receiving area for column values (named in an INTO clause): When working with SQL, host variables allow you to define a program data area that will hold the column values of a retrieved row. The INTO clause specifies one or more host variables where you want to store the column values returned by the SQL query. This flexibility enables dynamic handling of data within your database operations. For example:
dcl-s wkEmpID char(6) inz;
dcl-s wkEmpName char(50) inz;

exec sql select empid, empname
         into :wkEmpID, :wkEmpName
         from empMaster
         where empDept = 'IT'
         fetch first row only;
OR
dcl-ds empds dim(200) qualified;
  wkempID char(6) inz;
  wkempSal packed(11:2) inz;
end-ds;

exec sql select empid, empSalary
         into :empds
         from empMaster
         where empDept = 'IT'
         fetch first 200 rows only;
SQL Cursor
In IBM i, SQL cursors are essential constructs used to handle the result set returned by SQL queries within embedded SQL statements. A cursor allows programs to iterate over the rows of a result set sequentially, enabling row-level processing and manipulation of data retrieved from the database.
Creation Steps of Cursor:
- Prepare SQL statement (Optional)
dcl-ds empData extname('EMPMASTER') qualified;
end-ds;

SQLstring = 'Select * from empMaster where empid = ' + wkEmpId;
exec sql prepare SQLstmt from :SQLstring;
- Declare the Cursor
exec sql declare emp cursor for SQLstmt;
- Open the Cursor
exec sql open emp;
- Fetch from Cursor
exec sql fetch from emp into :empData;
dow sqlcode = 0;
  // {logic block}
  exec sql fetch from emp into :empData;
enddo;
- After all the records have been fetched, close the Cursor
exec sql close emp;
Type of SQL Cursor:
- Positioning-based Cursor
Based on whether the cursor is positioned sequentially or dynamically on the result table rows, there are two types:
- Serial/Sequential Cursor
- Scrollable Cursor
- Data-reflection-based Cursor
Based on whether data modified after the cursor is opened is reflected in the cursor's result table, there are two types:
- Sensitive Cursor
- Insensitive Cursor
Fetch for Rows:
To use the multiple-row FETCH statement with the host data structure array, the program must define a host data structure array that can be used by SQL.
- The number of loop iterations can be lowered because you fetch multiple rows at once.
- Declare your data structure with a dimension at the beginning.
- Declaring, opening, and closing the cursor are still necessary.
- Fetch a number of rows equal to the data structure array size instead of looping one row at a time.
dcl-ds empData extname('EMPMASTER') qualified dim(100);
end-ds;
dcl-s maxRows zoned(3) inz(100);
dcl-s rc zoned(5) inz;
dcl-s i zoned(5) inz;

exec sql declare getData cursor for
  select * from empMaster;
exec sql open getData;
exec sql fetch first from getData for :maxRows rows into :empData;
rc = SQLER3; // SQLER3 gives the count of rows fetched
dow rc > 0;
  for i = 1 to rc;
    // {logic block}
  endfor;
  exec sql fetch next from getData for :maxRows rows into :empData;
  rc = SQLER3;
enddo;
exec sql close getData;
Prepare SQL Statement
The PREPARE statement in the AS400 system is a powerful tool used by application programs to dynamically prepare SQL statements for execution.
- The PREPARE statement creates an executable SQL statement, known as a prepared statement, from a character string form of the statement called a statement string.
- It allows you to dynamically construct SQL statements at runtime, which is particularly useful when you need to parameterize your queries or execute dynamic SQL.
- Essentially, it prepares an SQL statement for later execution.
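As a minimal sketch in SQLRPGLE, reusing the empMaster table and the empDept/empSalary columns from the earlier examples (the statement and variable names are illustrative): the statement is built with a parameter marker, prepared once, and then executed with the host variable bound at run time.

dcl-s SQLstring varchar(200);
dcl-s wkDept char(10) inz('IT');

SQLstring = 'Update empMaster set empSalary = empSalary * 1.05 '
          + 'where empDept = ?';
exec sql prepare updStmt from :SQLstring;
exec sql execute updStmt using :wkDept;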
Execute Immediate SQL Statement
The EXECUTE IMMEDIATE statement offers a dynamic approach to executing SQL statements within a program. Unlike prepared SQL statements, which are pre-compiled and parameter-bound before execution, EXECUTE IMMEDIATE enables the execution of dynamically constructed SQL statements at runtime.
- The EXECUTE IMMEDIATE statement accepts a character string containing the SQL statement to be executed. This string can be dynamically constructed within the program.
- EXECUTE IMMEDIATE is particularly useful in scenarios where the structure or content of SQL statements cannot be determined statically at compile time.
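A minimal sketch (table and column names are illustrative); note that a statement string run with EXECUTE IMMEDIATE cannot contain parameter markers or host variable references:
dcl-s SQLstring varchar(200);

// The complete statement text is built at runtime
SQLstring = 'update empMaster set empDept = ''HR'' where empDept = ''IT''';
exec sql execute immediate :SQLstring;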
Error Handling Indicator
SQLCODE:
- SQLCODE is a variable that stores the result code of the most recently executed SQL statement within an embedded SQL program.
- SQLCODE indicates the outcome of an SQL operation, and different values have different significance for the execution of the statement. The most common values are 0 and 100 (0 indicates successful execution; 100 indicates end of records/no record affected), and negative values indicate errors.
- After executing an SQL statement, check the value of SQLCODE to determine the outcome of the operation. Based on the result, appropriate actions can be taken.
SQLSTATE:
- SQLSTATE is a character variable that stores a five-character SQL state code representing the outcome of the most recent SQL operation.
- It offers additional details regarding the type of error or warning encountered during the execution of an SQL statement.
In IBM i embedded SQL programs, developers commonly utilize SQLCODE and SQLSTATE to identify and manage errors. This practice enables robust error handling and effective exception management within their applications.
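As an illustration, a hedged sketch of a typical check after an UPDATE (the file and host variables are assumed for the example; SQLCODE and SQLSTATE come from the SQLCA the precompiler provides):
exec sql update empMaster
           set empSalary = empSalary * 1.10
         where empDept = :wkDept;

select;
  when sqlcode = 0;
    // statement ran successfully
  when sqlcode = 100;
    // no rows matched the WHERE clause
  when sqlcode < 0;
    // an error occurred; SQLSTATE holds the five-character detail code
    dsply ('SQL error, SQLSTATE = ' + sqlstate);
endsl;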
Usage
- The dynamic usage of files within a program without needing F-specs.
- Data Retrieval and Manipulation
- Performance Optimization
- Data Integrity and Security
- Transaction Management
- Full SQL Capabilities
- Error Handling and Diagnostics
Tables and Arrays
ARRAY –
The array is a collection of elements having the same data type and length.
In RPG, we use the ‘DIM’ keyword to define an array.
There are 3 types of arrays in RPGLE –
- Run time array
- Compile time array
- Pre-runtime array
1. Run time array –
In a run time array, values are filled into the array at run time only.
If a value is already assigned to an array index, it can be changed.
Fixed format example –
Line 1: Run time array name arr1 is defined with dimension 15 and length 10 having character data type for each element in array.
Line 2: A variable name index is defined with length (2,0) having zoned (numeric) data type.
Line 4: Assigning value to 1st index of array.
Line 5: Assigning value to 2nd index of array.
Line 6: Assigning index variable with value 3.
Line 7: Assigning value to 3rd index of array with the use of variable name index.
Free format example –
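A minimal free-format sketch of the same logic (the values assigned are illustrative):
**Free
dcl-s arr1 char(10) dim(15);   // run-time array: 15 elements, each 10 long
dcl-s index zoned(2:0);        // numeric index variable

arr1(1) = 'Value 1';           // assign to the 1st element
arr1(2) = 'Value 2';           // assign to the 2nd element
index = 3;
arr1(index) = 'Value 3';       // assign to the 3rd element via the index variable

*inlr = *on;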
2. Compile time array –
In a compile time array, values are filled into the array when the program is compiled.
Values in a compile time array cannot be changed at run time; they remain static.
Fixed format example –
Line 1: Compile time array arr1 is defined with dimension 5 and length 20 having character data type.
The Keyword CTDATA is used to represent compile time array.
The keyword PERRCD is used to represent number of element values in each row.
Total number of elements (dim) = PERRCD elements * Number of rows.
Line 11: ‘**’ should be at position 1, after this we can give any readable name like (CTDATA arr1)
Line 12: Compile time array value for 1st index, as PERRCD is ‘1’ so this complete line will be assigned to 1st index.
Line 13: Compile time array value for 2nd index.
Line 14,15,16: Compile time array for 3rd, 4th, 5th index.
Once the program is compiled, all the values will be filled to array name arr1.
Line 5,6,7,8: Array values are used directly as array is filled while compile time.
Free format example:
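A minimal free-format sketch (element values are illustrative); in fully free source the compile-time data still follows the code, after a **CTDATA record:
**Free
dcl-s arr1 char(20) dim(5) ctdata perrcd(1);  // compile-time array

dsply arr1(1);   // values are already loaded at compile time
dsply arr1(5);
*inlr = *on;

**CTDATA arr1
Compile time value 1
Compile time value 2
Compile time value 3
Compile time value 4
Compile time value 5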
3. Pre-runtime array –
The compile time array has a restriction on changing array values: to change a value we need to modify the program and recompile it.
In a pre-runtime array, we maintain the array elements in a separate file; if we want to change an array element, we change the values in the file, and there is no need to recompile the program.
Because the array elements are filled from the separate file, the array is loaded when the program is called.
A flat file is a physical file that has a record length and no DDS (Data Description Specification).
Below is the command to create a flat file:
CRTPF FILE(YASHDEV/FLATF1) RCDLEN(20)
RCDLEN means each record in the file can be of length 20.
Below are the values added to the file for test data:
Fixed format example:
Line 1: flat file FLATF1 is defined with file type as ‘I’ (input).
File Designation as ‘T’ to indicate an array or table file.
File Format as ‘F’ to indicate a program-described file.
Record Length as ’20’ to use length ’20’ for array element length.
Line 2: Pre-runtime array arr1 is defined with dimension 10 and length 20 having character data type.
Keyword FROMFILE is used to represent array to fill values from FLATF1 file.
Keyword PERRCD is used to represent the number of element values in each record of file.
While calling this program, it will fill the array from file FLATF1, and we can use the array directly.
Line 6,7,8,9: Array values are used directly without any array assignment.
arr1(1) will have the value ‘Test value 1’ as per the 1st record in file FLATF1, since the PERRCD keyword with value ‘1’ puts one element in each record.
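Reconstructed from the description above, the fixed-format source might look like the following sketch (column positions are approximate):
FFLATF1    IT   F   20        DISK
D arr1            S             20A   DIM(10) FROMFILE(FLATF1) PERRCD(1)
 /free
   dsply arr1(1);   // 'Test value 1', loaded from FLATF1 at program start
   *inlr = *on;
 /end-free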
Note: File Designation ‘T’ is not supported in fully free RPG format.
Tables –
In IBM i, tables are files that contain data in a structured format.
There are 2 types of files –
- Physical files
- Logical files
1. Physical files –
Physical files include instructions on how to provide or receive data from a program in addition to the actual data that is kept on the system. They have one or more members and only one record format. Database files may contain externally or program-described records.
Physical files type is ‘*FILE’ and attribute is ‘PF’.
There can be multiple columns and keys in a physical file.
Creating a dds source of physical file: –
- We can create a physical file DDS member with the ‘STRSEU’ command; press F4 to prompt, and the below screen will be displayed to enter the details.
Source file – Source physical file name where we need to create the physical file member.
Library – Library name in which the source file exists.
Source member – Physical file member name.
Source type – It should be ‘PF’ for physical file.
Option – It can be blank for default. There are multiple option values –
2=Edit a member
5=Browse a member
6=Print member
Text description – It can be blank for the default value. Also, we can give any text for our readability purposes.
Below is the dds source for an example physical file EMPPF-
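The DDS for EMPPF might look like this sketch (field lengths and column headings are assumptions for illustration):
A          UNIQUE
A          R EMPPFR
A            EMPNO          5P 0       COLHDG('Employee' 'Number')
A            EMPNAME       30A         COLHDG('Employee' 'Name')
A            EMPGENDER      1A         COLHDG('Gender')
A            EMPEMAIL      50A         COLHDG('Email')
A            EMPDEPT       10A         COLHDG('Department')
A          K EMPNO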
Line 1: This is a file level keyword ‘UNIQUE’, which is to allow only unique records in this file as per defined key.
Keyword entries are optional in a physical file; we can use them as per our requirements.
There are 5 columns (EMPNO, EMPNAME, EMPGENDER, EMPEMAIL, EMPDEPT) defined in this physical file example.
There are ‘COLHDG’ keywords at field level, which is useful for readable name of columns.
There is a key EMPNO defined on line 8, which is useful to read the records from this physical file. We can give multiple keys in a physical file as per our requirements.
There are multiple levels for keyword entry –
- File level entries
- Record format level entries
- Field level entries
- Key field level entries
File level entries – File level entries apply to the entire file.
Below are the file level entries –
UNIQUE – It indicates that duplicate key values are not allowed
FIFO – It arranges duplicate key values in first-in, first-out order.
LIFO – It arranges duplicate key values in last-in, first-out order.
FCFO – It arranges duplicate key values in first-changed, first-out order
Record format level entries – They apply to the defined record format.
Below are the record format level entries –
FORMAT – it shares field descriptions with an existing record format.
Below is the format of this keyword –
FORMAT(LIBNAME/FILENAME)
TEXT – It provides a description of the record or field.
Below is the format of this keyword –
TEXT(‘record format description’)
Field level entries – They apply to the individual fields defined in the file.
Below are some field level entries –
EDTCDE – It specifies an edit code (for reference function only).
EDTWRD – It provides an edit word (for reference function only).
REFFLD – It copies the field description from the referenced field.
REFSHIFT – It specifies a keyboard shift (for reference function only).
TEXT – It provides a description of the record or field.
TIMEFMT – It specifies the format of a TIME field.
TIMESEP – It specifies the separator used in the formatted TIME field.
VALUES – It provides a list of valid values (for reference function only).
VARLEN – It defines the field as a variable-length field.
Key field level entries – They apply to the keys defined in the physical file.
Below are the key field level entries –
DESCEND – It arranges records from the highest to the lowest key field value.
SIGNED – It arranges records using the sign portion of the key value.
ABSVAL – It arranges records using the absolute value of the key value.
UNSIGNED – It arranges records without using the sign portion of the key value.
ZONE – It arranges records using only the zone portion of the key value.
NOALTSEQ – It indicates to ignore any alternative collating sequence.
DIGIT – It arranges records using only the digit portion of the key value.
Below is the example (EMPPF1) of using REFFLD keyword –
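A sketch of what the EMPPF1 DDS might look like (the record format name is assumed):
A          R EMPPF1R
A            EMPNBR    R               REFFLD(EMPNO EMPPF)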
Here, EMPNO field of EMPPF file is referenced to EMPNBR.
EMPNBR has the same data type and length as EMPNO field of EMPPF file.
We can avoid giving the file name on each field definition by using the REF keyword at the file level.
Below is the example of using REF keyword –
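A sketch using the file-level REF keyword, so each REFFLD no longer needs the file name:
A                                      REF(EMPPF)
A          R EMPPF2R
A            EMPNBR    R               REFFLD(EMPNO)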
Creation of physical file object (Compile physical file)-
‘CRTPF’ is the command to compile physical file member. Type ‘CRTPF’ on command line and press F4 for prompt –
File – Object name for physical file.
Library – Object library in which physical file to be created.
Source file – Source file name in which physical file member is present.
Library – Library name in which source file is present.
Source member – Physical file member name.
Record length – It is used for flat files; leave it at the default when compiling a physical file member from DDS source.
Or we can use below command to create an object for physical file –
CRTPF FILE(&OBJLIB/&OBJNAME) SRCFILE(&SRCLIB/&SRCFILE) SRCMBR(&SRCMBR)
&OBJLIB – Object library in which physical file to be created.
&OBJNAME – Object name for physical file.
&SRCLIB – Library name in which source file is present.
&SRCFILE – Source file name in which physical file member is present.
&SRCMBR – Physical file member name.
CHGPF Command – The Change Physical File command changes the attributes of a physical file and all members of the physical file. The changed attributes are used for all members subsequently added to the file unless other values are specified or defaulted on the add operation.
Change Physical File Member (CHGPFM) command is used to change the attributes of a specific member.
Below is the CHGPF command –
CHGPF FILE(&OBJLIB/&OBJNAME) SRCFILE(&SRCLIB/&SRCFILE) SRCMBR(&SRCMBR)
&OBJLIB – Object library in which physical file to be created.
&OBJNAME – Object name for physical file.
&SRCLIB – Library name in which source file is present.
&SRCFILE – Source file name in which physical file member is present.
&SRCMBR – Physical file member name.
Other commands –
DSPFD – The Display File Description (DSPFD) command shows one or more types of information retrieved from the file descriptions of one or more database and/or device files.
Below is the DSPFD command to see all details of a physical file object –
DSPFD FILE(&FILELIB/&FILENAME)
&FILELIB – Physical file object library
&FILENAME – Physical file object name
DSPFFD – The Display File Field Description (DSPFFD) command shows, prints, or places in a database file field-level information for one or more files in a specific library or all the libraries to which the user has access.
Below is the DSPFFD command to see all details of a physical file fields –
DSPFFD FILE(&FILELIB/&FILENAME)
DSPDBR – The Display Database Relations (DSPDBR) command provides relational information about database files.
Below is the DSPDBR command to see all database relations of a physical file –
DSPDBR FILE(&FILELIB/&FILENAME)
Logical files –
In AS/400, logical files are used to provide alternate views of physical files by specifying a different record sequence, selecting specific records, or reordering fields. Here’s a brief overview of the content for logical files:
Record Format Definitions:
Define the record format(s) that the logical file will use. These formats are typically based on the physical file’s record format but can include selected fields or reorganized data.
Key Field Definitions:
Specify key fields for the logical file. These fields determine the order of records in the logical file. The keys can include fields from one or more record formats.
File Type and Attributes:
Indicate the type of logical file (e.g., keyed or arrival sequence) and set attributes such as whether it’s updateable or read-only.
Select/OMIT Conditions:
Define conditions to selectively include or exclude records from the logical file based on specific criteria. This enhances data retrieval efficiency.
Join Logical Files:
If necessary, define join logical files that combine records from multiple physical files based on specified key relationships.
Access Paths:
Specify access paths for the logical file, which can include single-level or multi-level indexes. This helps optimize data retrieval operations.
Override Capabilities:
Utilize override capabilities to customize the behavior of the logical file, such as field renaming, data type conversion, or default values.
Example dds source for non-join logical file –
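Consistent with the line-by-line notes below, the DDS might look like this sketch:
A          R EMPPFR                    PFILE(EMPPF)
A            EMPNO
A            EMPNAME
A          K EMPNO
A          O EMPNO                     COMP(GT 30000)
A          S EMPNO                     COMP(GT 20000)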
Line 1: we have used same record format name of physical file EMPPF.
PFILE is the keyword which indicates the logical file is based on EMPPF physical file.
Line 2,3: These are 2 fields which we select from physical file.
If no fields are defined in the logical file, all the fields of the physical file are included.
Line 4: There is 1 key (EMPNO) defined for this logical file.
Line 5: This is definition of omit criteria, this logical file will omit the data in which EMPNO is greater than 30000.
Line 6: This is definition of select criteria, this logical file will select the data in which EMPNO is greater than 20000.
Join-Logical files –
In AS/400, join logical files are used to combine records from multiple physical files based on specified key relationships.
Here’s a basic overview of how you can create join logical files to join EMPPF and EMPPF3:
Below is the dds source for EMPPF physical file –
Below is the dds source for EMPPF3 physical file –
Below is the example dds source for EMPLF1 join logical file.
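Matching the line-by-line notes below, a sketch of the EMPLF1 DDS (the join field EMPNO is assumed to exist in both files):
A          R EMPLFR                    JFILE(EMPPF EMPPF3)
A          J                           JOIN(EMPPF EMPPF3)
A                                      JFLD(EMPNO EMPNO)
A            EMPNO
A            EMPNAME
A            EMPADDR
A            EMPMOBNO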
Line 1: There is a record format name defined EMPLFR and JFILE keyword is defined to join EMPPF and EMPPF3 physical files.
Line 2: There is a JOIN keyword which indicates the sequence of files.
Line 3: There is a JFLD keyword which indicates the fields of both files to join.
Line 4,5: There are EMPNO and EMPNAME fields are from EMPPF physical file.
Line 6,7: There are EMPADDR and EMPMOBNO fields are from EMPPF3 physical file.
Below is the command to compile a logical file –
CRTLF FILE(&FILELIB/&FILENAME) SRCFILE(&SRCLIB/&SRCFILE) SRCMBR(&SRCMBR)
&FILELIB – Logical file object library
&FILENAME – Logical file object name
&SRCLIB – Library name in which the source file is present
&SRCFILE – Source file name in which the logical file member is present
&SRCMBR – Logical file member name
SQL Equivalent global temporary tables –
Global temporary tables are created in the QTEMP library, which is different for each session.
A global temporary table is used by the current application process and cannot be shared with other application processes.
It can be used in a SQLRPGLE program or executed interactively.
It can be used to eliminate arrays or array data structures in an application program: create a global temporary table, write data to it at run time, and use it as the program flow requires.
Below is the statement of global temporary table –
DECLARE GLOBAL TEMPORARY TABLE TEMP_EMP (EMPNBR CHAR(6) NOT NULL, EMPSAL DECIMAL(9, 2), EMPBONUS DECIMAL(9, 2), EMPDEPT CHAR(10)) ON COMMIT PRESERVE ROWS NOT LOGGED RCDFMT TEMP_EMPR;
In the above SQL statement, a temporary table is created in the QTEMP library when the DECLARE GLOBAL TEMPORARY TABLE statement is executed.
DECLARE GLOBAL TEMPORARY TABLE is the syntax to create the TEMP_EMP table in the QTEMP library.
There are 4 columns (EMPNBR, EMPSAL, EMPBONUS, EMPDEPT) defined.
ON COMMIT PRESERVE ROWS indicates that all rows of the table are preserved.
NOT LOGGED indicates that there will be no logs when changes are made to this table.
There is a record format TEMP_EMPR defined using RCDFMT.
We can declare a temporary table by using another table as below.
DECLARE GLOBAL TEMPORARY TABLE TEMP_EMP_1 LIKE LIBNAME/FILENAME ON COMMIT PRESERVE ROWS NOT LOGGED RCDFMT TEMP_EMPR1;
In the above SQL statement, we have used the LIKE keyword, which indicates that this table has all the fields of the FILENAME file present in the LIBNAME library.
Below is the example of creating a global temporary table TEMP_EMP_2 having 3 columns (EMPNBR, EMPNAME, EMPSAL) from FILENAME file which is present in LIBNAME library and data for EMPNBR greater than 1000.
DECLARE GLOBAL TEMPORARY TABLE TEMP_EMP_2 AS (SELECT EMPNBR, EMPNAME, EMPSAL FROM LIBNAME/FILENAME WHERE EMPNBR > 1000) WITH DATA ON COMMIT PRESERVE ROWS NOT LOGGED RCDFMT TEMP_EMPR2;
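Once declared, the temporary table in QTEMP can be used like any other table for the rest of the job. A hedged SQLRPGLE sketch (column names as in the declaration above; LIBNAME/FILENAME is the document's placeholder source file):
dcl-s totalSal packed(11:2);

// Populate the temporary table from a permanent file
exec sql insert into QTEMP/TEMP_EMP
         select EMPNBR, EMPSAL, EMPBONUS, EMPDEPT
           from LIBNAME/FILENAME
          where EMPDEPT = 'IT';

// Read it back like any other table
exec sql select sum(EMPSAL) into :totalSal
           from QTEMP/TEMP_EMP;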
Data Structure
Free Format
Fixed Format
Types of Data Structures in RPG
- Externally described Data Structures
- Multiple occurrence Data Structures
- Data area Data Structures
- Qualified Data Structures
- File information Data Structures
- Indicator Data Structures
- Program status Data Structures
- Externally described Data Structures
An externally described data structure in RPG is a data structure whose definition is stored in an external file. This allows you to define the data structure once and then use it in multiple programs.
To define an externally described data structure, you can use the DCL-DS operation code and the EXT or EXTNAME keyword.
The EXT keyword specifies that the data structure definition is stored in an external file. The EXTNAME keyword specifies the name of the external file.
Here, we have used the externally described PF file (CUST).
Fixed format
Free format
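A minimal free-format sketch; the subfields come from the CUST file's external definition, so the field name used below is an assumption for illustration:
**Free
// Subfields are taken from the external definition of CUST
dcl-ds custDs extname('CUST') qualified;
end-ds;

custDs.CUSTNAME = 'John';   // CUSTNAME is assumed to be a field of CUST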
- Multiple occurrence Data Structures
A multiple occurrence data structure in RPG AS/400 is a data structure that can contain multiple occurrences of the same data. This can be useful for storing repeated data, such as a list of items in a purchase order or a table of data from a database.
To define a Multiple Occurrence data structure in RPG AS/400, you use the OCCURS keyword. The OCCURS keyword specifies the number of occurrences of the data structure that can exist.
Fixed format
Free format
Code example
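A minimal free-format sketch of a multiple occurrence data structure (names and sizes are illustrative):
**Free
dcl-ds order occurs(10);      // 10 occurrences of the same layout
  ordNo  zoned(5:0);
  ordAmt packed(9:2);
end-ds;

%occur(order) = 1;            // position to the 1st occurrence
ordNo  = 1;
ordAmt = 100.50;

%occur(order) = 2;            // switch to the 2nd occurrence
ordNo  = 2;
ordAmt = 250.00;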
- Data area Data Structures
A data area data structure in RPG is a data structure that is defined in a program and is associated with a data area. Data area data structures are automatically read in and locked at program initialization time, and the contents of the data structure are written to the data area when the program ends with LR on.
Command to create a Data Area – CRTDTAARA
Command to display the Data Area – DSPDTAARA
Fixed format
Free Format
Code example
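A minimal free-format sketch, assuming a data area MYDTAARA already created with CRTDTAARA; DTAARA(*AUTO) gives the automatic read/lock at start and write-back at end:
**Free
dcl-ds counters dtaara(*auto : 'MYDTAARA');
  lastOrdNo zoned(7:0);
end-ds;

lastOrdNo += 1;   // written back to the data area when the program ends with LR on
*inlr = *on;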
- Qualified Data Structures
A qualified data structure allows us to define the same subfield names in two or more data structures. A qualified data structure is used to group related data items together. Each data item within the qualified data structure has a qualified name, which includes the data structure name and the field name separated by a dot known as a period.
Fixed format syntax
Free format syntax
Code example
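A minimal free-format sketch showing the same subfield names living in two qualified data structures:
**Free
dcl-ds address qualified;
  city  char(20);
  state char(2);
end-ds;

dcl-ds shipTo likeds(address);   // second DS with the same subfields

address.city = 'Pune';
shipTo.city  = 'Mumbai';         // no name clash: references are qualified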
- File information Data Structures
A file information data structure (INFDS) in RPG is a data structure that contains information about a file, such as the file status, file feedback, and input/output feedback. An INFDS is used by RPG programs to handle file exceptions and errors.
Fixed format
Free format
Code example
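A minimal free-format sketch (the file name is illustrative); positions 1–8 and 11–15 of the INFDS hold the file name and the file status:
**Free
dcl-f custfile usage(*input) keyed infds(fileFb);

dcl-ds fileFb;
  fdFile   char(8)    pos(1);    // file name
  fdStatus zoned(5:0) pos(11);   // file status code (*STATUS)
end-ds;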
- Indicator Data Structures
An indicator data structure in RPG is a data structure that is used to store conditioning and response indicators for a workstation/display file (DSPF) or a printer file (PRTF). Indicator data structures are defined using the DCL-DS operation code, and the INDDS keyword on the file declaration names the indicator data structure associated with that file.
The INDARA keyword is declared in the display file DDS when we want to use an indicator data structure.
Fixed format
Free format
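A minimal free-format sketch, assuming a display file MYDSPF whose DDS specifies INDARA:
**Free
dcl-f mydspf workstn indds(dspInds);

dcl-ds dspInds;
  exit   ind pos(3);    // maps to indicator 03 (e.g. F3=Exit)
  sflDsp ind pos(31);   // maps to indicator 31 (e.g. conditions SFLDSP)
end-ds;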
- Program status Data Structures
A program status data structure (PSDS) in RPG is a data structure that contains information about the status of a program, including any errors that have occurred. The PSDS is defined in the main source section and is accessible to all procedures in the module. Only one PSDS is allowed per module.
Here, is the list of all the program status information.
Fixed format
Free format
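A minimal free-format sketch; the positions shown (program name, status code, user profile) are standard PSDS locations:
**Free
dcl-ds pgmSts psds;
  pgmName   char(10)   pos(1);     // program name
  pgmStatus zoned(5:0) pos(11);    // status code
  curUser   char(10)   pos(254);   // user profile running the program
end-ds;

dsply ('Running ' + %trimr(pgmName) + ' as ' + %trimr(curUser));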
Usage
- Group fields: Data structure can be used to group fields and can make a single string.
- Break fields into subfields: it can be used to break a string into several different fields.
Restrictions
- Fixed Format: RPG traditionally used fixed-format source code, which means that columns are predefined for specific purposes. This can limit the flexibility in defining data structures.
- Maximum Number of Elements: RPG has limits on the number of elements (fields) you can define in a data structure. The maximum number of elements varies depending on the RPG version and the specific compiler, but it’s typically around 99 elements.
- Field Names: Field names in RPG data structures must adhere to specific naming conventions. For example, they are limited to 10 characters in fixed-format RPG; free-format RPG allows much longer names, but 10 characters or fewer is still commonly preferred.
- No Dynamic Data Structures: RPG typically does not support dynamic data structures, such as linked lists or dynamically allocated arrays. Data structures are typically defined statically at compile time.
- Data Structure Alignment: RPG data structures may be subject to alignment requirements, which can affect the storage layout and padding of fields within a structure.
Integrated Language Environment
Introduction to ILE
ILE, or Integrated Language Environment, is a programming environment on AS/400 (now known as IBM i) designed to enhance flexibility and modularity in software development. Introduced to IBM systems in the 1990s, ILE supports multiple programming languages like RPG, COBOL, and C, allowing developers to create modular, reusable components. This environment promotes a structured approach to programming, fostering better organization and maintenance of applications on the IBM i platform.
Key Features
- Multi-Language Support: ILE accommodates various programming languages such as RPG (Report Program Generator), COBOL, and C. It empowers developers to choose the language that best suits their application requirements while maintaining inter-operability.
- Modular Design: ILE has its emphasis on modularity. Developers can create reusable modules, and integrate them as service programs, which contribute to a more efficient and maintainable codebase. This modular approach facilitates code sharing and enhances collaboration among developers.
- Service Programs and Binding: ILE’s service programs encapsulate logical units of code, fostering a modular structure. The binding process connects these service programs to form a cohesive application. This binding mechanism ensures that changes in one module do not necessitate a complete recompilation of the entire application.
- Procedure-Oriented Programming: ILE promotes a procedural programming paradigm, allowing developers to break down complex tasks into manageable procedures. This granularity enhances code readability and maintainability, crucial for the long-term success of software projects.
Conclusion:
Integrated Language Environment (ILE) has emerged as a cornerstone in IBM i programming, providing a versatile and powerful framework for developers. Its multi-language support, modular design, and procedural programming paradigm contribute to the creation of scalable, maintainable, and efficient applications on the IBM i platform. ILE stands as a testament to IBM’s commitment to innovation and adaptability in the ever-evolving landscape of software development.
Advantages of ILE
Program development is improved with the help of ILE, which is a set of tools and associated system support. The capabilities of this model can be used only by programs produced by the ILE family of compilers: ILE RPG, ILE COBOL, ILE SQLRPG, ILE C, ILE C++, and ILE CL.
The Integrated Language Environment (ILE) is preferable to earlier program models in several ways. Let’s explore the following advantages of ILE:
- Binding:
Static binding is supported by ILE. This indicates that rather than being resolved at runtime, external operations (subroutines, functions, etc.) are resolved during compilation. Better performance is achieved and dynamic binding’s overhead is avoided.
- Modularity:
By enabling you to design distinct modules (programs, service programs, procedures) that can be separately compiled and maintained, ILE encourages modularity.
The reusability of modules across many applications improves the organization and maintainability of the code.
- Reusable Components:
You can use ILE to design reusable parts that encapsulate certain functionality, like procedures or service programs. By sharing these parts across several programs, redundancies can be avoided and development efficiency can be increased.
- Common Runtime Services:
ILE offers a collection of standard runtime functions that are compatible with several programming languages (RPG, COBOL, C, etc.), including memory management, error handling, and file input/output. This assures uniform behavior and streamlines cross-language integration.
- Coexistence:
Programs written in various languages can co-exist within a single application because of ILE. The ability to call C functions from RPG programs and vice versa allows for the development of mixed languages.
- Source Debugger:
Developers can find and address problems during program execution with the help of ILE’s powerful source-level debugger. The ability to debug is necessary for effective development and maintenance.
- Better Control over Resources:
ILE gives users more precise control over resources including files, memory, and database connections. Based on particular requirements, developers can optimize resource use.
- Better Control over Language Interactions:
Interaction between languages is made easier with the help of ILE.
Let’s take an example: an RPG program can call a C function directly, enhancing flexibility.
- Better Code Optimization:
Advanced code optimization is carried out by ILE compilers, making programs run faster and more effectively. Performance-critical applications especially benefit from this enhancement.
- Foundation for the Future:
ILE provides a strong base upon which to modernize legacy applications and integrate them with new technologies. It helps organizations modernize their systems without compromising their current investments.
ILE Program Concept
Activation Group:
An activation group is a job sub-structure that includes all ILE and service programs. These sub-structures contain the resources necessary to make the programs execute. These resources can be broadly classified into the following categories:
- Dynamic storage
- Static program variables
- Temporary resources for maintaining data;
- Certain kinds of exception handlers and ending procedures;
Activation Group Creation:
When you construct your program or service program, you can define an activation group attribute that will control the development of a non-default activation group at runtime. The ACTGRP parameter on the CRTPGM or CRTSRVPGM commands is used to specify the attribute. There is no Create Activation Group command.
One of the following activation group attributes is used by every ILE program:
A user-named activation group:
Specified with the ACTGRP(name) parameter. With the help of this feature, you can operate a group of ILE programs and service programs as a single application. When it is initially required, the activation group is formed. Then, every application and service program that uses the same activation group name makes use of it.
A system-named activation group:
Specified using the CRTPGM command’s ACTGRP(*NEW) option. With this feature, each time the program is called, a new activation group can be created. The name of this activation group is chosen by ILE. The name that ILE gave you is unique to your job. The name you select for a user-named activation group doesn’t match with the name assigned to a system-named activation group. Service programs do not support this attribute.
An attribute to use the activation group of the calling program:
Specified with the ACTGRP(*CALLER) option. With this feature, you can create an ILE program or service program that executes within the caller’s activation group. When a program or service program is activated using this attribute, a new activation group is never created.
An attribute to choose the activation group appropriate to the programming language and storage model:
Specified with the CRTPGM command’s ACTGRP(*ENTMOD) option. The program entry procedure module specified by the ENTMOD argument is examined when ACTGRP(*ENTMOD) is given. One of the following may occur:
- If the module attribute is RPGLE, CBLLE, or CLLE:
- If STGMDL(*SNGLVL) is specified, then QILE is used as the activation group.
- If STGMDL(*TERASPACE) is specified, then QILETS is used as the activation group.
- If the module attribute is not RPGLE, CBLLE, or CLLE, then *NEW is used as the activation group.
- ACTGRP(*ENTMOD) is the default value for this parameter of the CRTPGM command.
Each activation group in a job has a unique name. Once an activation group exists within a job, it is used to activate programs and service programs that specify its name. Because of this architecture, duplicate activation group names are not allowed within a single job.
The ACTGRP parameter on the UPDPGM and UPDSRVPGM commands can be used to change the activation group into which the program or service program is activated.
Default Activation Groups:
The system creates two activation groups that are used by all OPM programs when a job is initiated. Application programs use one of these activation groups. The other is used for operating system programs.
Static program variables are stored in single-level storage through these OPM default activation groups. The OPM default activation groups cannot be removed. They are deleted by the system when your job ends.
If the following requirements are met, ILE programs and service programs can be activated in the OPM default activation groups:
- The ILE programs or service programs were created with the activation group *CALLER option or with the DFTACTGRP(*YES) option.
- The call to the ILE programs or service programs originates in the OPM default activation groups.
- The ILE program or service program does not use the teraspace storage model.
The operating system will also create a teraspace default activation group when it determines one is needed.
Static program variables are stored in teraspace by the teraspace default activation group.
You cannot delete the teraspace default activation group. When your job is terminated, the system will remove it. If the following requirements are met, ILE programs and service programs may be activated in the Teraspace default activation group:
- The ILE program or service program was created with the activation group *CALLER option.
- The state of the ILE program or service program is *USER.
For the ILE program or service program to be activated into the teraspace default activation group, one of the following criteria must also be satisfied:
- The call to the ILE program or service program originates in the teraspace default activation group and the ILE program or service program was created with either the storage model *INHERIT or the storage model *TERASPACE option.
- The ILE program or service program was created with the storage model *INHERIT option, there are no application entries on the call stack associated with a different activation group and the activation occurs in preparation for one of these invocations:
- SQL stored procedure
- SQL function
- SQL trigger
- The ILE program or service program was created with the storage model *TERASPACE option and there are no call stack entries associated with a teraspace storage model activation group. See Selecting a Compatible Activation Group for additional information.
Non-Default Activation Group Deletion
Activation groups require resources to be created within a job. If an application can reuse an activation group, processing time could be reduced.
To enable you to exit an invocation without terminating or erasing the related activation group, ILE offers various options.
Whether the activation group is deleted depends on the type of activation group and the method in which the application ended.
The following are various ways for an application to go back to a call stack entry linked to a different activation group:
- HLL end verbs: For example, STOP RUN in COBOL or exit() in C.
- Call to API CEETREC
- Unhandled exceptions: Unhandled exceptions can be moved by the system to a call stack entry in another activation group.
- Language-specific HLL return statements: For example, a return statement in C, an EXIT PROGRAM statement in COBOL, a RETURN statement in RPG, or a RETURN command in CL.
- Skip operations: For example, sending an exception message or branching to a call stack entry that is not associated with your activation group.
Activation groups can be removed from an application by executing API CEETREC or by utilizing HLL end verbs. Moreover, deletion of your activation group may result from an unhandled exception.
As long as the closest control boundary is the oldest call stack entry connected to the activation group, these actions will always remove your activation group. If the closest control boundary is not the oldest call stack entry, control moves to the call stack entry that comes before the control boundary, and the activation group is not deleted.
Example: ILEACT(Main PGM)
ILEACTPGM1:
ILEACTPGM2:
Output :
WRKJOB (Take Option 18 to see Activation Group)
Before Calling Main Program
After Calling Main Program
Module
A module is a non-runnable object (type *MODULE) which is the output of an ILE compiler. It is the basic building block for creating runnable ILE Objects. This Module object acts as a significant difference between ILE and OPM programs where the output of an OPM compiler is a runnable program.
Below Diagram shows about Modules which are non-executable object that holds procedures and then the same can be bind to *PGM and or *SRVPGM.
Common concepts in ILE RPG, ILE COBOL, ILE C & ILE C++
- Exports
An export is the name of a procedure or data item, coded in a module object, that is available for use by other ILE objects. The export is identified by its name and its associated type, either procedure or data. An export can also be called a definition.
- Imports
An import is the use of, or reference to the name of a procedure or data item not defined in the current module object. The import is identified by its name and its associated type, either procedure or data. An import can also be called a reference.
Since the module object is the basic building block of an ILE runnable object, the following may also be generated when a module object is created.
- Debug Data
This is the data necessary for debugging a runnable ILE object.
- Program Entry Procedure (PEP)
This is the compiler-generated code that is the entry point for an ILE program on a dynamic program call. It is similar to the code provided for the entry point in an OPM program.
- User Entry Procedure (UEP)
A user entry procedure, written by a programmer, is the target of the dynamic program call. It is the procedure that gets control from the PEP. The main() function of a C program becomes the UEP of that program in ILE.
Conceptual View of a Module
The below example shows that Module object M1 exports two procedures (Get_Employee and Upd_Employee) and a data item (rtn_code). The same module object M1 imports a procedure called Del_Employee. It also contains a PEP, a corresponding UEP, and debug data.
Creating a Program with the CRTRPGMOD and CRTPGM Commands
The two-step process of program creation consists of compiling source into modules using CRTRPGMOD and then binding one or more module objects into a program using CRTPGM. This process helps to create permanent modules.
Key Features
- I. Allows to modularize an application without recompiling the whole application.
- II. To reuse the same module in different applications.
Creating a Module Object
An ILE RPG module consists of one or more procedures, data item specifications, and static storage used by all the procedures in the module. It is possible to directly access the procedures or data items in one module from another ILE object.
Following are the procedures that can make up an ILE RPG module,
- Cycle-Main Procedure:
An optional procedure which consists of the set of H, F, D, I, C, and O specifications that begin the source. The cycle-main procedure has its own LR semantics and logic cycle, neither of which is affected by those of other ILE RPG modules in the program.
- Sub-Procedure:
Zero or more procedures, coded on P, D, and C specifications, which do not use the RPG cycle. A subprocedure may have local storage that is available for use only by the subprocedure itself. One of the subprocedures may be designated as a linear-main procedure if a cycle-main procedure is not coded.
The main procedure (if coded) can always be called by other modules in the program. Sub-procedures may be local to the module or exported. If they are local, they can only be called by other procedures in the module; if they are exported from the module, they can be called by any procedure in the program.
Module creation consists of compiling a source member, and, if that is successful, creating a *MODULE object. The *MODULE object includes a list of imports and exports referenced within the module.
Note:
- I. A module cannot be run by itself. We must bind one or more modules together to create a program object (type *PGM) or a service program object (type *SRVPGM)
- II. Then the procedures can be accessed within the bound modules through static procedure calls.
Advantages of combining modules into a PGM or a Service PGM:
- I. Allows to reuse the pieces of code which generally results in smaller programs. Smaller programs provide better performance and easier debugging capabilities.
- II. Allows to maintain shared code with less chance of introducing errors to other parts of the overall program.
- III. Manage large programs more effectively. It allows to divide the old program into parts that can be handled separately. If the program needs to be enhanced, only recompiling of the changed modules is sufficient.
Module Creation using CRTRPGMOD Command
Using a standard ILE RPG Compiler Command – CRTRPGMOD, we can create the *MODULE Object. This command can be used interactively, as part of a batch input stream, or from a Command Language (CL) Program.
Example of CRTRPGMOD
CRTRPGMOD MODULE (Module Lib / Module Name) SRCFILE (Source Lib / SRCPF)
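The second step binds the compiled module(s) into a runnable program with CRTPGM; for instance (library, module, and program names are illustrative):
CRTPGM PGM(MYLIB/MYPGM) MODULE(MYLIB/MYMOD1 MYLIB/MYMOD2) ENTMOD(MYLIB/MYMOD1) ACTGRP(*NEW)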
Binding Directory
Introduction
The concept of the Binding Directory is crucial to understand while working with ILE programs. Binding Directory in IBM i is an object of type *BNDDIR.
It contains entries for the modules and service programs to be used by an ILE program. So, if we specify a binding directory when creating an ILE program, the program can utilize the procedures present in the modules and service programs bound through it.
Usage
Binding Directory is an important ILE concept: it improves efficiency by gathering all the required procedures, spread across multiple modules and service programs, into a single binding directory that can then be used by multiple ILE programs. This reduces the need to write similar code again and again in different programs. Multiple commands are available to handle binding directories in IBM i.
Useful Commands
The following commands are important to understand the concept of binding directory:
CRTBNDDIR (Create binding directory):
This command creates a binding directory object in the particular library. The command syntax can be expressed as:
CRTBNDDIR BNDDIR(*CURLIB/PIOBNDDIR) AUT(*ALL) TEXT('Test Binding Directory')
The noticeable parameters of this command are:
- BNDDIR: In this parameter, we have to specify the name of the library & binding directory object to be created. If we specify *CURLIB, the binding directory will be created in the current library.
- AUT: In this parameter, we have to specify the authority available to the users of the binding directory being created.
- TEXT:In this parameter, we can specify any text description for the *BNDDIR object being created.
If the user presses F4 on the CRTBNDDIR command, the below-mentioned prompt is displayed:
ADDBNDDIRE (Add entry to binding directory):
This command adds entries of either modules or service programs to an existing binding directory. The command syntax can be expressed as:
ADDBNDDIRE BNDDIR(PIOLIB/PIOBNDDIR) OBJ((PIOLIB/PIOMOD1 *MODULE) (PIOLIB/PIOSRVPGM1 *SRVPGM *IMMED)) POSITION(*LAST)
The noticeable parameters of this command are:
- BNDDIR: In this parameter, we have to specify the name of the library & binding directory to which we are adding the entry.
- If we specify *CURLIB, the binding directory from the current library is picked.
- If we specify *LIBL, the binding directory is picked from the first library in the library list that contains it.
- OBJ: In this parameter, we mention the details of all object entries (either *MODULE or *SRVPGM) we want to add in the binding directory.
- POSITION: In this parameter, we mention the level at which the object entry will be located in the BNDDIR.
If the user presses F4 on the ADDBNDDIRE command, the below-mentioned prompt is displayed:
DSPBNDDIR (Display Binding Directory):
This command is utilized to display the details of the objects bound in an existing binding directory. The command syntax can be expressed as:
DSPBNDDIR BNDDIR(PIOLIB/PIOBNDDIR) OUTPUT(*)
The noticeable parameters of this command are:
BNDDIR: In this parameter, we have to specify the name of the library and binding directory that we want to be displayed.
OUTPUT: In this parameter, if we mention the *PRINT value, the output from the command is printed to the job’s spooled output. If *OUTFILE is mentioned, the output of the command is stored in the database file specified.
If the user presses F4 on the DSPBNDDIR command, the below-mentioned prompt is displayed:
If we run this command for the binding directory PIOBNDDIR, the following result will be visible:
WRKBNDDIR (Work with Binding Directory):
Through this command, we can do multiple operations with a binding directory like create, delete, and display. Also, we can do operations with the entries of the binding directory like add an entry and remove an entry. The command syntax can be expressed as:
WRKBNDDIR BNDDIR(PIOLIB/PIOBNDDIR)
The noticeable parameter of this command is:
- BNDDIR: In this parameter, we have to specify the name of the library and binding directory that we want to work with.
If the user presses F4 on the WRKBNDDIR command, the below-mentioned prompt is displayed:
If we run this command for the binding directory PIOBNDDIR, the following result will be visible:
DLTBNDDIR (Delete Binding directory)
This command is used to delete an existing binding directory object from a particular library. The command syntax can be expressed as:
DLTBNDDIR BNDDIR(PIOLIB/PIOBNDDIR)
The noticeable parameter of this command is:
- BNDDIR: In this parameter, we have to specify the library name and binding directory we want to delete.
If the user presses F4 on the DLTBNDDIR command, the below-mentioned prompt is displayed:
Restrictions
The restrictions a user has on a binding directory depend on the authority level provided when executing the CRTBNDDIR command.
- *LIBCRTAUT: If we specify this AUT value, the binding directory will be created with the authority level same as specified in the CRTAUT parameter while creating the library (using the CRTLIB command) in which the BNDDIR object is being created now.
- *CHANGE: If we specify this AUT value, the user can do all basic operations & also be able to change the BNDDIR.
- *ALL: If we specify this AUT value, the user can do all the operations.
- *USE: If we specify this AUT value, the user can do all basic operations but cannot change the BNDDIR.
Procedure
The main procedure consists of everything that comes before the first procedure specification, so no special coding is required to begin it. The main procedure’s parameters can be coded either with a prototype and procedure interface in the global definitions, or with a *ENTRY PLIST in the main procedure’s calculations.
Any procedure interface found in the global definitions is assumed to be the procedure interface for the main procedure. The name is required on the procedure interface for the main procedure, and the prototype with the same name must come before the procedure interface in the source.
The name of the main procedure must match the name of the module being created. Either use this name for the prototype and procedure interface, or specify it in the prototype’s EXTPROC keyword.
Sample code:
Ctl-Opt DftActGrp(*No) Main(MainProc);

Dcl-proc MainProc;
  Dcl-s String char(30);

  String = 'This is MainProc';
  Return;   // a linear-main procedure ends with RETURN; the RPG cycle is not used
End-proc;
To define the main procedure as a program, you can also use a prototype and procedure interface. In this case, the prototype’s EXTPGM keyword would be specified.
Sample code:
// In the calling program: prototype with EXTPGM and the dynamic program call
Dcl-Pr CheckObj extpgm('CHECKOBJ');
  Object char(10);
  Library char(10);
  Found ind;
End-pr;

CheckObj(ObjectName:Library:Found);

// In program CHECKOBJ: the matching procedure interface
Dcl-Pi CheckObj;
  Object char(10);
  Library char(10);
  Found ind;
End-Pi;

If Found = '1';
  Dsply 'Object found';
Else;
  Dsply 'Object not found';
Endif;
Subprocedure:
A procedure that follows the main source section is called a subprocedure. A subprocedure differs from a main procedure primarily in that:
- Names that are defined within subprocedure are not accessible outside the subprocedure.
- No cycle code is generated for the subprocedure.
- The call interface must be prototyped.
- Calls to subprocedures must be bound procedure calls.
- Only P, F, D, and C specifications can be used.
- Other than being called through a program call rather than a bound call, a linear-main procedure is just like any other subprocedure.
Because the data items in subprocedures are local, subprocedures can offer independence from other procedures. Local data items are typically kept in automatic storage, which means that the value of a local variable is not preserved between calls to the procedure.
Subprocedures provide another feature: a subprocedure can be called in an expression to return a value, and parameters can be passed to it by value.
Below figure illustrates the possible layout of a module with multiple procedures:
Subprocedure Definition:
Sample code:
Dcl-Proc Calculator Export;
  Dcl-Pi Calculator Zoned(10:0);
    Num1 Zoned(4:0);
    Num2 Zoned(4:0);
    Num3 Zoned(4:0);
  End-Pi;

  Dcl-S Result Zoned(10:0);

  Result = Num1 * 10 + Num2 + Num3 - 15;
  Return Result;
End-Proc;
Dcl-s Number1 Zoned(4:0) Inz(20);
Dcl-s Number2 Zoned(4:0) Inz(30);
Dcl-s Number3 Zoned(4:0) Inz(40);
Dcl-s Result Zoned(10:0);

Dcl-pr Calculator Zoned(10:0);
  Num1 Zoned(4:0);
  Num2 Zoned(4:0);
  Num3 Zoned(4:0);
End-Pr;
Result = Calculator(Number1:Number2:Number3);
- A prototype that includes the name, any parameters, and any return value.
- “Dcl-Proc” keyword will begin a procedure.
- A definition of a Procedure-Interface which defines any parameters and the return value. The corresponding prototype and the procedure interface must match. If the subprocedure does not return a value and receives no parameters, the procedure-interface definition is not required.
- Additional definition specifications for prototypes, constants, and variables required by the subprocedure. These definitions are local definitions.
- Any standard or free form calculation specifications required to complete the procedure’s task. Both local and global definitions may be used in the calculations. The subprocedure contains any local subroutines. They are only useful within the subprocedure. A RETURN operation must be included in the subprocedure if it returns a value.
- The “End-Proc” Keyword indicates the end of a procedure.
Subprocedure Scope:
All items defined in a subprocedure are local. When a local item and a global item are defined with the same name, the local definition is used for all references to that name within the subprocedure.
- Subroutine names and tag names are known only to the procedure in which they are defined, even those defined in the main procedure.
- Every field specified in the input and output specifications is global. When a subprocedure uses input or output specifications (for example, while processing a read operation), the global name is used even if a local variable with the same name exists.
Subprocedure calculations:
A subprocedure does not have its own RPG cycle code, so it needs to be coded differently than a main procedure. When one of the following happens, the subprocedure ends:
- A RETURN operation is processed.
- The final computation within the subprocedure is processed.
Service Program
Introduction
A service program is a collection of available data items and runnable procedures that other ILE programs or service programs can directly and easily access. A service program is similar to a subroutine library or procedure library in many aspects.
The name “service program” refers to the common services that these programs offer that other ILE objects might need.
A service program’s public interface consists of the names of the exported procedures and data items that other ILE objects can access. A service program can only export items that are exported from the module objects that make up the service program.
The programmer can specify which procedures or data items are known to other ILE objects. Therefore, private or hidden procedures and data can exist within a service program and remain unavailable to other ILE objects.
A service program can be updated without requiring the other ILE programs or service programs that use it to be re-created. Whether a change is compatible with the existing support is up to the programmer making the changes to the service program.
Characteristics of an ILE *SRVPGM object:
- To create the *SRVPGM object, one or more modules are copied from any ILE language.
- There is no PEP connected to the service program. A dynamic program call to a service program is not valid as there is no PEP. The PEP of a module is ignored.
- The public interface identifies this service program’s exports, which are available for use by other ILE programs or service programs.
- The procedure and data item names that are exported from the service program are used to generate signatures.
- As long as previous signatures are still supported, service programs can be changed without impacting ILE programs or service programs that utilize them.
- Modules can have debug data.
- It is only possible to export weak data to an activation group. It cannot be included in the exported public interface from the service program.
Create and use the service program:
MODULECALL:
**Free
Dcl-Pr Addition Zoned(5:0);
  Num1 Zoned(2:0);
  Num2 Zoned(2:0);
End-Pr;

Dcl-Pr Subtraction Zoned(5:0);
  Num1 Zoned(2:0);
  Num2 Zoned(2:0);
End-Pr;

Dcl-S Number1 Zoned(2:0) Inz(60);
Dcl-S Number2 Zoned(2:0) Inz(20);
Dcl-S Output Zoned(5:0);

Output = Addition(Number1:Number2);
Dsply Output;
Output = Subtraction(Number1:Number2);
Dsply Output;
*Inlr = *On;
MODULE1:
**Free
Ctl-Opt NoMain;   // this module contains only subprocedures

Dcl-Proc Addition Export;
  Dcl-Pi Addition Zoned(5:0);
    Num1 Zoned(2:0);
    Num2 Zoned(2:0);
  End-Pi;

  Dcl-S Result Zoned(5:0);

  Result = Num1 + Num2;
  Return Result;
End-Proc;
MODULE2:
**Free
Ctl-Opt NoMain;   // this module contains only subprocedures

Dcl-Proc Subtraction Export;
  Dcl-Pi Subtraction Zoned(5:0);
    Num1 Zoned(2:0);
    Num2 Zoned(2:0);
  End-Pi;

  Dcl-S Result Zoned(5:0);

  Result = Num1 - Num2;
  Return Result;
End-Proc;
- Create modules MODULECALL, MODULE1, and MODULE2.
CRTRPGMOD MODULE(DEMOLIB/MODULECALL) SRCFILE(DEMOLIB/QRPGLESRC)
CRTRPGMOD MODULE(DEMOLIB/MODULE1) SRCFILE(DEMOLIB/QRPGLESRC)
CRTRPGMOD MODULE(DEMOLIB/MODULE2) SRCFILE(DEMOLIB/QRPGLESRC)
- To create a service program SRVPGM1, use the CRTSRVPGM command.
CRTSRVPGM SRVPGM(DEMOLIB/SRVPGM1) MODULE(DEMOLIB/MODULE1 DEMOLIB/MODULE2) EXPORT(*ALL)
- To display a service program, use the DSPSRVPGM command.
DSPSRVPGM SRVPGM(DEMOLIB/SRVPGM1)
- To update a service program, use the UPDSRVPGM command.
- Now bind the service program SRVPGM1 to the calling program CALLPGM1.
CRTPGM PGM(DEMOLIB/CALLPGM1) MODULE(DEMOLIB/MODULECALL) BNDSRVPGM((DEMOLIB/SRVPGM1))
Signature
What is a Service Program Signature?
Every service program has at least one signature. The signature is generated by the system when a Create Service Program (CRTSRVPGM) command specifying Export(*ALL) or Export(*SRCFILE) is issued.
The signature value is generated by an algorithm that uses as input the names of all of the service program’s exports and their sequence. Here, “exports” are the external procedures, external variables, constants, etc. that are exported so that they can be used/called in other programs.
In the above screenshot, a Service Program ‘CUSTSTUFF’ is created by binding 2 modules -Module1 and Module2.
What is the use of a Signature?
A signature is a value that provides a similar check for service programs that a level check does for files. It helps ensure that changes made to the service program are done in such a way that the programs using them can still function properly.
Where can we see the Signature of a Service Program?
You can see the current signature value for a service program by using the Display Service Program (DSPSRVPGM) command. The current signature appears on the first screen of the command output.
We can see the Signature of the Service Program ‘CUSTSTUFF’ in the last row.
If you continue to press Enter on each screen displayed by the command, screen 9 (of 10)
shows all the valid signatures, including any valid previous signatures.
In addition to these, we can also see the modules and the list of external procedures that are bound to the Service Program.
What causes Signature to Change?
A signature changes when the list of exports for the service program changes. The most common cause of a signature change is adding a new procedure to the service program.
By using ‘UPDSRVPGM’ we are updating the Service Program ‘CUSTSTUFF’ as a new external procedure is added in one of the modules.
In the above screenshot, we can see a new Export is added in the Service Program. That causes the change in Signature as in the below screenshot.
Note: A change in a procedure’s parameters doesn’t change the service program’s signature.
What happens when Signature changes?
When a program is called, it immediately checks the signature value of any service programs that it uses and produces a signature violation message if the signatures don’t match. This happens at program start-up, not when you call a service program procedure.
The program ‘CUSTPGM’ is created by binding the Service Program ‘CUSTSTUFF’ when it had only 4 external procedures.
As you can see, the Signature of ‘CUSTSTUFF’ that ‘CUSTPGM’ refers to is the older one. (The command used to display this is DSPPGM against the program ‘CUSTPGM’.)
Now that the Signature of the Service Program is changed after the addition of a new Procedure, a call to ‘CUSTPGM’ results in an error.
When we see the job log, we realise it is a Signature Violation error.
How to correct a Signature violation?
Use the Update Program (UPDPGM) command. If any signatures have changed, it also updates the program’s signature values.
Note: You don’t need to specify the service program because the UPDPGM command automatically re-checks the signatures of any bound service programs.
As you can see, the Signature to which ‘CUSTPGM’ refers now matches the current Signature of ‘CUSTSTUFF’.
Now, a call to ‘CUSTPGM’ does not result in any error.
Conclusion
In our example, only one program uses the Service Program 'CUSTSTUFF'. But what if some 20 programs use a service program, and we have to find those 20 among 100 programs?
In this case, to resolve the 'Signature Violation' error we must first find out which programs actually use the concerned service program:
- We could execute DSPPGM against hundreds of programs, or
- We could query qsys2.bound_module_info with SQL (V7R3 and above), or use the QBNLPGMI/QBNLSPGM APIs, to get the list of programs that use the service program.
Then we must execute UPDPGM on each of those 20 programs manually.
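For instance, a sketch of the SQL approach using the QSYS2.BOUND_SRVPGM_INFO catalog (a companion of BOUND_MODULE_INFO; the column and library names shown are illustrative and worth verifying on your release):
SELECT PROGRAM_LIBRARY, PROGRAM_NAME
  FROM QSYS2.BOUND_SRVPGM_INFO
 WHERE BOUND_SERVICE_PROGRAM_LIBRARY = 'MYLIB'
   AND BOUND_SERVICE_PROGRAM = 'CUSTSTUFF'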
Needless to say, it is a cumbersome process, unless you have an appropriate change management tool that does all the updates and recompilations automatically.
The solution to this problem is ‘Binder Language’.
Binder Language
Binder language commands & parameters
Below are the instructions which are typically used to write the export source file in binder language:
- STRPGMEXP & ENDPGMEXP commands
- The STRPGMEXP command specifies the start of the list of exported symbols (procedures and data items) and is paired with the ENDPGMEXP command, which specifies the end of this export list.
- There can be multiple STRPGMEXP-ENDPGMEXP blocks inside a binder language source.
- Every block specifies a different export list and a corresponding signature of the service program.
- PGMLVL parameter
- This parameter is used with the STRPGMEXP command in binder language; it specifies which STRPGMEXP-ENDPGMEXP block's export list is used to create the latest/current signature of the service program.
- *CURRENT and *PRV are the values used with the PGMLVL parameter.
- Only one STRPGMEXP command can specify PGMLVL(*CURRENT); its STRPGMEXP-ENDPGMEXP block defines the current signature of the service program.
- Apart from that one block, all other STRPGMEXP commands in an export source file must specify PGMLVL(*PRV).
- SIGNATURE parameter
- This parameter is also provided with STRPGMEXP command but it’s not mandatory. It is used to explicitly provide a signature for the export list.
- *GEN (the default) and an explicit value provided by the programmer are the possible values used with the SIGNATURE parameter.
- A possible situation for providing an explicit signature is when the parameters of an existing procedure of the service program change (and nothing else changes). Even after recompiling the module and updating the service program, the system can generate the same signature, because the signature depends on the list of exported procedures and their order in the export list, not on the parameters of the procedures. In this case we can provide a different signature value while updating the service program to force all calling programs to be recompiled.
- EXPORT command:
- The EXPORT command is used to provide the name of the procedure to be exported from the service program, using the SYMBOL parameter.
If the symbol name contains lowercase characters, enclose the name in apostrophes in the SYMBOL parameter.
Binder Language examples
Below are some samples of binder language sources:
- Example (1) : How to use binder language source in service program creation
There is a module named NUMERICOPS with the procedures shown below. To create a service program named NUMERICOPS that provides only the functionality of adding numbers (export only the "ADDNUMBERS" procedure), we can create the binder language source file below.
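The screenshot is not reproduced here; a minimal sketch of such a binder source (member type BND) could be:
STRPGMEXP PGMLVL(*CURRENT)
  EXPORT SYMBOL(ADDNUMBERS)
ENDPGMEXP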
While creating the service program we can provide the binder language source file as below:
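A sketch of the create command, with an assumed library and binder source file name:
CRTSRVPGM SRVPGM(MYLIB/NUMERICOPS) MODULE(MYLIB/NUMERICOPS) EXPORT(*SRCFILE) SRCFILE(MYLIB/QSRVSRC) SRCMBR(NUMERICOPS)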
- Example (2): Maintaining multiple signatures for the service program
For the service program created in example 1, below is the signature which has been created (we can see this signature using DSPSRVPGM SRVPGM(NUMERICOPS) DETAIL(*SIGNATURE) command):
Now to start exporting “SUBTRACTNUMBERS” procedure, we modify the binder language as below:
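A sketch of the modified source, assuming the explicit signature 'ADD2_SUB2' referred to in example 3 was already in use on the current block:
STRPGMEXP PGMLVL(*CURRENT) SIGNATURE('ADD2_SUB2')
  EXPORT SYMBOL(ADDNUMBERS)
  EXPORT SYMBOL(SUBTRACTNUMBERS)
ENDPGMEXP
STRPGMEXP PGMLVL(*PRV)
  EXPORT SYMBOL(ADDNUMBERS)
ENDPGMEXP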
Create the service program again using the binder language export list file (through the CRTSRVPGM command given in example 1). After re-creating the service program, we see the signatures below available for the service program:
As the older signature is still maintained with the service program, we do not need to recompile the older programs bound to the service program.
- Example (3): Impact of “SIGNATURE” parameter in binder language
As we can see, in example 2, the SIGNATURE parameter has been used with STRPGMEXP command.
Now, let’s say we change the “ADDNUMBERS” procedure to accept three numbers instead of two and modify the returning parameter length from 8 to 10 numeric as below (compare with example 1 source to see the modifications):
If we re-create the service program using the same binder language source (no modifications made to the SIGNATURE parameter), there will be no impact on the signatures of the service program, because no new procedure was added, no existing procedure was removed, and the order of the exported procedures did not change.
Below are the signatures after recompiling the module and re-creating the service program. We can observe that these are the same signatures that were generated in example 2 (shown again for quick reference/comparison).
To resolve this issue (to generate a new signature when there is a change in the parameters of the procedure), we can modify the SIGNATURE parameter (for this example, modified it from “ADD2_SUB2” to “ADD3_SUB3”):
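A sketch of the change, with only the SIGNATURE value modified:
STRPGMEXP PGMLVL(*CURRENT) SIGNATURE('ADD3_SUB3')
  EXPORT SYMBOL(ADDNUMBERS)
  EXPORT SYMBOL(SUBTRACTNUMBERS)
ENDPGMEXP
STRPGMEXP PGMLVL(*PRV)
  EXPORT SYMBOL(ADDNUMBERS)
ENDPGMEXP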
Now, if we re-create the service program, we find the signatures below; we can see that a new signature has been generated, which will force the recompilation of existing programs.
Additional tips
Tip 1: To retrieve the current binder language source file for the service program, use RTVBNDSRC command.
Tip 2: While creating the service program, to see the errors in compilation listing related to binder language, use parameter DETAIL(*EXTENDED) or DETAIL(*FULL).
Tip 3: There should be no change in the order of the symbols exported earlier when providing a new STRPGMEXP-ENDPGMEXP block; otherwise it will create instability in service program processing.
Bind By Reference
Signature Violation
The signature violation error occurs when the signature IDs of a program and its associated service program don’t match.
Signature IDs
A signature ID is a special identification associated with a program or service program. It guarantees that the interfaces of the caller and the callee are compatible. When a program calls a procedure in a service program, the system verifies that the signature IDs match; a signature violation error happens if they don't.
Common Causes
Service Program Changes: A service program’s signature ID changes when it is modified, such as when new modules are added.
Existing Callers: Programs that refer to the service program may still have older signature IDs recorded in them.
Example:
MODULECALL1:
**Free
Dcl-Pr Addition Zoned(5:0);
  Num1 Zoned(2:0);
  Num2 Zoned(2:0);
End-Pr;

Dcl-S Number1 Zoned(2:0) Inz(60);
Dcl-S Number2 Zoned(2:0) Inz(20);
Dcl-S Output  Zoned(5:0);

Output = Addition(Number1:Number2);
Dsply Output;
*Inlr = *On;
MODULE1:
**Free
Dcl-Proc Addition Export;
  Dcl-Pi Addition Zoned(5:0);
    Num1 Zoned(2:0);
    Num2 Zoned(2:0);
  End-Pi;

  Dcl-S Result Zoned(5:0);

  Result = Num1 + Num2;
  Return Result;
End-Proc;
- Create Module MODULECALL1, MODULE1
CRTRPGMOD MODULE(DEMOLIB/MODULECALL1) SRCFILE(DEMOLIB/QRPGLESRC)
CRTRPGMOD MODULE(DEMOLIB/MODULE1) SRCFILE(DEMOLIB/QRPGLESRC)
- To create a service program SRVPGM1, use the CRTSRVPGM command.
CRTSRVPGM SRVPGM(DEMOLIB/SRVPGM1) MODULE(DEMOLIB/MODULE1) EXPORT(*ALL)
- Now bind the service program SRVPGM1 to the calling program CALLPGM1 and call the CALLPGM1 program.
CRTPGM PGM(DEMOLIB/CALLPGM1) MODULE(DEMOLIB/MODULECALL1) BNDSRVPGM((DEMOLIB/SRVPGM1))
Result:
- Now add the SUBTRACTION procedure to the MODULE1 module and update the SRVPGM1 service program using the UPDSRVPGM command. The signature ID of the associated service program will change.
- After updating the SRVPGM1 service program, calling the CALLPGM1 program will produce the program signature violation error.
Avoiding Signature Violation error
Rebind Programs: Rebind all programs that refer to the modified service program. By doing this, you can be sure that their signature IDs have been updated with the new program signature.
Manage Signatures: As an alternative, you can manage service program signatures by creating binder language source, which is used when the service program is created or updated. By doing this, you can control the handling of signature IDs without having to recompile every caller.
Binder Language
A service program’s exports are defined by a small set of nonrunnable commands known as the binder language. When a BND source type is specified, the binder language allows the source entry utility (SEU) syntax checker to prompt and verify the input.
The binder language consists of a list of the following commands:
- Start Program Export (STRPGMEXP) command, which identifies the beginning of a list of exports from a service program
- Export Symbol (EXPORT) commands, each of which identifies a symbol name available to be exported from a service program
- End Program Export (ENDPGMEXP) command, which identifies the end of a list of exports from a service program
The public interface to a service program is defined by the symbols found between a pair of STRPGMEXP PGMLVL(*CURRENT) and ENDPGMEXP commands. A signature serves as a representation of that public interface: a value known as a signature identifies the interface that a service program supports.
If you choose to specify an explicit signature, your binder language source only needs one export block; you can simply add new exports to the end of the list of exports.
If you choose not to specify an explicit signature, the binder generates a signature from the list of procedure and data item names to be exported and from the order in which they are specified.
In that case, each time you add a new export to your service program, you have to create a new export block in your binder source, keeping the previous list in a *PRV block.
The first entry in a list of exports from a service program is indicated by the Start Program Export (STRPGMEXP) command. A service program’s list of exports can be ended with the End Program Export (ENDPGMEXP) command.
Multiple signatures are produced when a source file contains multiple STRPGMEXP and ENDPGMEXP pairs. There is no significance to the order in which the STRPGMEXP and ENDPGMEXP pairs occur.
PGMLVL(*CURRENT) can be specified on only one STRPGMEXP command, though it need not be the first one. All other STRPGMEXP commands in the source file must specify PGMLVL(*PRV). The export block of the STRPGMEXP command that specifies PGMLVL(*CURRENT) is represented by the current signature.
You can explicitly specify a signature for a service program using the signature (SIGNATURE) parameter.
A character string or a hexadecimal string can be used as the explicit signature.
The binder generates a signature from exported symbols when the signature parameter’s default value, *GEN is used.
The STRPGMEXP command’s level check (LVLCHK) parameter indicates whether the binder will automatically check a service program’s public interface.
A symbol name that can be exported from a service program is identified by the Export Symbol (EXPORT) command.
Example:
- You can create a binder source file using the STRSEU command with source type BND.
- Export the procedure ADDITION using the Export Symbol (EXPORT) command.
- Create the service program using the CRTSRVPGM command, specifying the BNDSRC binder source file for the export list.
- See the signature ID of the newly created service program SRVPGM2.
- Bind the SRVPGM2 service program to CALLPGM2 and call CALLPGM2.
- Specify another STRPGMEXP and ENDPGMEXP pair with the PGMLVL(*CURRENT) parameter, and export the new procedure SUBTRACTION.
- Update the existing service program SRVPGM2 with the BNDSRC source file. One more signature will be added alongside the old signature.
- Call the CALLPGM2 program; this time you will not get a signature violation error.
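Putting these steps together, a sketch of what the commands and the final BNDSRC source might look like (library, source file, and member names are illustrative):
STRSEU SRCFILE(DEMOLIB/QSRVSRC) SRCMBR(BNDSRC) TYPE(BND)

BNDSRC after adding the SUBTRACTION export:
STRPGMEXP PGMLVL(*CURRENT)
  EXPORT SYMBOL(ADDITION)
  EXPORT SYMBOL(SUBTRACTION)
ENDPGMEXP
STRPGMEXP PGMLVL(*PRV)
  EXPORT SYMBOL(ADDITION)
ENDPGMEXP

CRTSRVPGM SRVPGM(DEMOLIB/SRVPGM2) MODULE(DEMOLIB/MODULE1) EXPORT(*SRCFILE) SRCFILE(DEMOLIB/QSRVSRC) SRCMBR(BNDSRC)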
Bind By Copy
Bind by copy is nothing but static binding. Binding a module with "bind by copy" means the entire module object definition is copied into the bound program. Once you create a program object from a module using bind by copy, you can delete the module object and the program still runs.
The modules specified on the MODULE parameter of commands like CRTPGM, CRTSRVPGM etc. are always bound by copy. Refer below screenshots for more info.
Module parameter from CRTSRVPGM
Even if you add modules to a binding directory and bind them to a program through the binding directory, it is still bind by copy.
Advantages of Bind by Copy
If you bind a module to a program using bind by copy, there is no extra overhead in loading and executing the program, so program execution is faster.
Disadvantages of Bind by copy
If you have a simple piece of code in a module and bind it into multiple programs using bind by copy, each program will have its own copy of the code, and that increases the program size.
Also, if you need to make any modifications to that piece of code, you must modify the module and recompile all the programs that are bound to the module. In such scenarios, maintaining the list of bound programs is another overhead.
Below is a simple code to print some patterns:
Module 1
CRTRPGMOD MODULE(QTEMP/PATTERN) SRCFILE(DEMOLIB/QRPGLESRC)
You can create the program using the command below.
CRTPGM PGM(QTEMP/PATTERN) MODULE(QTEMP/PATTERN)
Check the objects using the below command.
WRKOBJ QTEMP/PATTERN
Now delete the module by taking option 4
Now call the program
SQL on IBM i
Stored Procedures
Introduction
A stored procedure is a program or SQL routine that is registered to the database with the CREATE PROCEDURE statement and invoked with the SQL CALL statement. Stored procedures let you encapsulate business logic on the server, run it close to the data, and call it again from different places within your code.
Stored procedure types
There are two categories into which stored procedures can be divided:
- SQL stored procedures
- External stored procedures
- SQL stored procedures
- SQL stored procedures are written in the SQL language. This makes it easier to port stored procedures from other database management systems (DBMS) to the iSeries server and from the iSeries server to other DBMS. Implementation of the SQL stored procedures is based on procedural SQL standardized in SQL99.
Example 1: Create a stored procedure to update a balance for customer using input parameters.
- Create a table as BANK using RUNSQLSTM command.
CREATE OR REPLACE TABLE BANK (
  CUSTNO   NUMERIC(10),
  CUSTNAME VARCHAR(30),
  BALANCE  DECIMAL(9,2),
  PRIMARY KEY(CUSTNO)
) RCDFMT RBANK;
- Create a SQL Stored procedure to update BALANCE column in BANK table based on input parameter:
CREATE OR REPLACE PROCEDURE UPDATECUSTBANK
  (IN IN_CUSTNO  NUMERIC(10),
   IN IN_BALANCE DECIMAL(9,2))
LANGUAGE SQL
MODIFIES SQL DATA
UPDATE BANK
   SET BALANCE = BALANCE + IN_BALANCE
 WHERE CUSTNO = IN_CUSTNO
- The BANK table currently holds the following data, with CUSTNO 100,003 having a BALANCE of 500.00:
CUSTNO    CUSTNAME   BALANCE
100,001   JAMES         100.15
100,003   BRYAN         500.00
100,002   JOHN          214.00
100,004   RICHS         854.00
100,005   RHONDA     10,000.00
- Now add 500 to the BALANCE for CUSTNO 100,003 using a CALL statement in STRSQL:
CALL UPDATECUSTBANK(100003, 500)
CALL statement complete.
CUSTNO    CUSTNAME   BALANCE
100,001   JAMES         100.15
100,003   BRYAN       1,000.00
100,002   JOHN          214.00
100,004   RICHS         854.00
100,005   RHONDA     10,000.00
Example 2: Create a stored procedure with a CASE condition to update a customer's balance based on a performance rating, using input parameters.
- Create the stored procedure using the RUNSQLSTM command.
CREATE OR REPLACE PROCEDURE INCREMENTSALARY
  (IN IN_CUSTNO NUMERIC(10),
   IN IN_RATING CHAR(10))
LANGUAGE SQL
MODIFIES SQL DATA
CASE IN_RATING
  WHEN 'GOOD' THEN
    UPDATE BANK SET BALANCE = BALANCE * 10 WHERE CUSTNO = IN_CUSTNO;
  WHEN 'AVG ' THEN
    UPDATE BANK SET BALANCE = BALANCE * 5 WHERE CUSTNO = IN_CUSTNO;
  WHEN 'BAD ' THEN
    UPDATE BANK SET BALANCE = BALANCE * 1 WHERE CUSTNO = IN_CUSTNO;
END CASE
- Now multiply the BALANCE by 10 for CUSTNO 100,001 in the BANK table using a CALL statement in STRSQL:
CALL INCREMENTSALARY(100001, 'GOOD')
CALL statement complete.
- CUSTNO 100001 balance updated successfully.
CUSTNO    CUSTNAME   BALANCE
100,001   JAMES       1,001.50
100,003   BRYAN       1,000.00
100,002   JOHN          214.00
100,004   RICHS         854.00
100,005   RHONDA     10,000.00
- External stored procedures
- An external stored procedure is written by the user in one of the programming languages on the iSeries server. The host language source is compiled to create a *PGM object, and the CREATE PROCEDURE statement then tells the system where to find the program object that implements the stored procedure. The stored procedure registered in the following example returns the name of the supplier with the highest sales in a given month and year. The procedure is implemented in ILE RPG with embedded SQL:
c/EXEC SQL
c+ CREATE PROCEDURE HSALE
c+   (IN  YEAR INTEGER,
c+    IN  MONTH INTEGER,
c+    OUT SUPPLIER_NAME CHAR(20),
c+    OUT HSALE DECIMAL(11,2))
c+ EXTERNAL NAME SPROCLIB.HSALES
c+ LANGUAGE RPGLE
c+ PARAMETER STYLE GENERAL
c/END-EXEC
- The following SQL CALL statement calls the external stored procedure, which returns a supplier name with the highest sales:
c/EXEC SQL
c+ CALL HSALE(:PARM1, :PARM2, :PARM3, :PARM4)
c/END-EXEC
User Defined Functions
An SQL function that the user defines and can invoke is known as a user-defined function (UDF). Just like the built-in functions you can invoke from SQL, a UDF's logic usually extends or improves SQL by adding capabilities that SQL lacks or cannot accomplish adequately. You can also encapsulate functionality in a UDF and call it again from different places within your code.
Example 1: Program based UDF
The table structure that will be utilized in the example is listed below.
Table1- EMPLOYEE
     R REMP
       EMPID         10S 0
       EMPNAME       20A
       EMPJOIND       8A
     K EMPID
Create a procedure named GETDAYS in a NOMAIN module GET0001 that returns the day of the week for a given date, and compile the module using option 15 from PDM.
ctl-opt nomain;

dcl-proc getdays export;
  dcl-pi getdays char(3);
    in_date char(8) const;
  end-pi;

  dcl-s day  char(3);
  dcl-s dowk packed(1:0);

  dowk = %rem(%diff(%date(in_date:*iso0):d'0001-01-01':*days):7);

  select;
    when dowk = 0;
      day = 'MON';
    when dowk = 1;
      day = 'TUE';
    when dowk = 2;
      day = 'WED';
    when dowk = 3;
      day = 'THU';
    when dowk = 4;
      day = 'FRI';
    when dowk = 5;
      day = 'SAT';
    when dowk = 6;
      day = 'SUN';
  endsl;

  return day;
end-proc;
Create a service program and bind the module with it.
CRTSRVPGM SRVPGM(PIOLIB/GETDATSV) MODULE(PIOLIB/GET0001) EXPORT(*ALL)
Now create a user-defined function (UDF) that calls this exported procedure.
create function rtvday( in_date char(8))
returns char(3)
language rpgle
deterministic
no sql
external name 'PIOLIB/GETDATSV(GETDAYS)'
parameter style general
program type sub
RTVDAY was created, changed, or dropped, but object not modified.
Using UDF in SQL query:
SELECT EMPID, EMPNAME, EMPJOIND, RTVDAY(EMPJOIND) FROM PIOLIB.EMPLOYEE
Result Set
EMPID    EMPNAME   EMPJOIND   RTVDAY(EMPJOIND)
10,001   JAMES     20240319   TUE
10,001   KINGS     20220219   SAT
10,001   YORKS     20200110   FRI
Example 2: SQL based UDF.
create or replace function priority(in_date date)
returns char(7)
language sql
begin
  return(
    case
      when in_date < current date then 'NONE'
      when in_date <= current date + 2 days then 'HIGH'
      else 'MEDIUM'
    end);
end
Invoke the UDF from an SQL query as shown below:
SELECT EMPID, EMPNAME, EMPJOIND, EMPORDER, PRIORITY(EMPJOIND) FROM PIOLIB/EMPLOYEE
Result Set
EMPID EMPNAME EMPJOIND EMPORDER PRIORITY
10,001 EMP1 20221005 04/04/24 NONE
SQL Triggers
Introduction
Why do we need Triggers?
- Monitors Database Activity
- Maintain Database Consistency and Enforce business rules
- Concept of Trigger
- Insert, Delete, Update on Table/View
- Called By Database
- Allow granularity at the column level
- UDF/stored procedure
Simple SQL Triggers Example
- Trigger Condition
- Trigger Action
- Example#1: After Trigger
- Example#2 Before Trigger
- Example#3 Multi condition Triggers
- Example#4 Conditional Triggers
- Example#5 Trigger Using Stored Procedure
How to create SQL triggers
How to see SQL triggers
How to remove SQL triggers
Introduction
When an insert, update, or delete operation is made to a table, triggers offer a mechanism to monitor, adjust, and manage the tables.
Triggers are highly beneficial when there is significant interdependence between tables, or when a certain action has to be taken in response to a table change.
Why do we need Triggers?
- Monitors Database Activity
  The trigger becomes active only when an insert, update, or delete is made to the trigger-associated table.
- Maintain Database Consistency and Enforce business rules
  When two or more tables are linked to one another, any change made to the trigger-associated table causes all related tables to synchronize with one another. The trigger's action modifies the related tables to achieve this synchronization.
- Concept of Trigger
  A certain set of activities is carried out upon the trigger's execution.
- Insert, Delete, Update on Table/View
  A trigger can be attached to insert, update, or delete operations on a table or view.
- Called By Database
  The database management system itself calls the trigger's activities when the trigger fires.
- Execute actions that are not database-related
  The trigger can also be used for non-database tasks, such as emailing or sending messages.
- Allow granularity at the column level
  Instead of adding a trigger to the entire table, you may apply it to a single column.
- UDF/stored procedure
  When the triggering operation is carried out, SQL triggers are called by the database management system, which may then run UDFs or SQL stored procedures.
Simple SQL Triggers Example
The table structures that will be utilized in the example are listed below.
Table1 - TGRPF1
A          R RTGRPF1
A            CUSTID          9P 0
A            NAME           10A
A            DEPARTMENT     50A
A          K CUSTID
Table2 - TGRPF2
A          R RTGRPF2
A            TRGTIME          Z   COLHDG('Trigger' 'time')
A            JOBNAME        28A   COLHDG('Job' 'name')
A            TRGTYPE         2A   COLHDG('Trigger' 'type')
- Trigger Condition
  The trigger will be added to TGRPF1 and will fire when any record is inserted into the table.
- Trigger Action
  The newly added information will be logged in TGRPF2 on each row insert.
- Example#1: After Trigger
  The trigger, named NEW_CUSTOMER, will be activated upon the insertion of a record into TGRPF1.
CREATE TRIGGER NEW_CUSTOMER
AFTER INSERT ON TGRPF1
FOR EACH ROW MODE DB2ROW
INSERT INTO TGRPF2 VALUES(CURRENT TIMESTAMP, JOB_NAME, 'I');
- Example#2: Before Trigger
  The trigger, named NEW_CUSTOMER, will be activated before a record is inserted into TGRPF1.
CREATE OR REPLACE TRIGGER NEW_CUSTOMER
BEFORE INSERT ON TGRPF1
FOR EACH ROW MODE DB2ROW
INSERT INTO TGRPF2 VALUES(CURRENT TIMESTAMP, JOB_NAME, 'I');
- Example#3: Multi-condition Triggers
01 CREATE OR REPLACE TRIGGER NEW_CUSTOMER
02   AFTER INSERT OR DELETE OR UPDATE ON TGRPF1
03   REFERENCING NEW ROW AS N OLD ROW AS O
04   FOR EACH ROW MODE DB2ROW
05 BEGIN
06   DECLARE TSTAMP TIMESTAMP;
07   IF INSERTING THEN
08     INSERT INTO TGRPF2 VALUES(CURRENT TIMESTAMP, JOB_NAME, 'I');
09   END IF;
10   IF DELETING THEN
11     INSERT INTO TGRPF2 VALUES(CURRENT TIMESTAMP, JOB_NAME, 'D');
12   END IF;
13   IF UPDATING THEN
14     SET TSTAMP = CURRENT TIMESTAMP;
15     INSERT INTO TGRPF2 VALUES(TSTAMP, JOB_NAME, 'U');
16   END IF;
17 END
Line 1: CREATE OR REPLACE is an excellent feature added in IBM i 7.2 and later 7.1 TRs. It saves me from having to drop the trigger before creating the updated version; previously, I would have had just CREATE. The remainder of the line specifies that my trigger will be named NEW_CUSTOMER.
Line 2: AFTER specifies that the trigger fires after the database activity for inserts, deletes, and updates on my library's TGRPF1 file.
Line 3: This line indicates that all fields/columns in the new row/record will have the prefix “N” and all fields in the previous row/record will have the prefix “O”.
Line 4: DB2ROW indicates that the trigger executes after each row/record operation. The alternative, DB2SQL, runs only once all row operations are finished.
Line 5: Indicates where the trigger code starts.
Line 6: A timestamp (or variable TSTAMP) is defined. This is what we’ll use to insert the update rows.
Lines 7-9: If the action was an insert, a row is added to the trigger output file. Only the new values from the file are used because this is an insert.
Lines 10–12: This portion of the trigger, which puts the previous values into the output file, is executed when a deletion is carried out.
Lines 13–16: For an update, the timestamp is captured in the variable defined on line 6 before the log row is written. If both the old and new values were logged as two rows, using CURRENT TIMESTAMP directly could give the rows different timestamps; moving CURRENT TIMESTAMP into the variable ensures that every row logged for the update carries the identical timestamp.
Line 17: The trigger code ends here, matching the BEGIN on line 5.
- Example#4: Conditional Triggers
CREATE OR REPLACE TRIGGER NEW_CUSTOMER
AFTER UPDATE ON TGRPF1
REFERENCING NEW ROW AS NEW OLD ROW AS OLD
FOR EACH ROW MODE DB2ROW
WHEN (NEW.CUSTID <> OLD.CUSTID)
BEGIN
  INSERT INTO TGRPF2 VALUES (CURRENT TIMESTAMP, JOB_NAME, 'U');
END;
The line below retrieves the values from both the current and previous rows.
REFERENCING NEW ROW AS NEW OLD ROW AS OLD
Additionally, the condition below compares the old and new values of the CUSTID field.
WHEN(NEW.CUSTID <> OLD.CUSTID)
- Example#5 Trigger Using Stored Procedure
CREATE OR REPLACE PROCEDURE SQLTRGPROC(
  IN P_CUSTID DECIMAL(9,0),
  IN P_FLAG   CHAR(1))
SPECIFIC SQLTRGPROC
BEGIN
  IF P_FLAG = 'I' THEN
    INSERT INTO TGRPF1(CUSTID) VALUES(P_CUSTID);
  END IF;
  IF P_FLAG = 'D' THEN
    DELETE FROM TGRPF1 WHERE CUSTID = P_CUSTID;
  END IF;
  IF P_FLAG = 'U' THEN
    UPDATE TGRPF1 SET CUSTID = P_CUSTID WHERE CUSTID = P_CUSTID;
  END IF;
END
The CALLPROCFROMTRIGGER trigger is generated in this example, and if it already exists, it is replaced with this trigger.
CREATE OR REPLACE TRIGGER CALLPROCFROMTRIGGER
AFTER INSERT OR DELETE OR UPDATE OF NAME ON TGRPF1
REFERENCING NEW ROW AS NEW OLD ROW AS OLD
FOR EACH ROW MODE DB2ROW
PROGRAM NAME TRIGGER9
BEGIN
  DECLARE L_CUSTID DECIMAL(9,0);
  DECLARE L_FLAG   CHAR(1);
  IF INSERTING THEN
    SET L_FLAG = 'I';
    SET L_CUSTID = NEW.CUSTID;
    CALL SQLTRGPROC(L_CUSTID, L_FLAG);
  END IF;
  IF DELETING THEN
    SET L_FLAG = 'D';
    SET L_CUSTID = OLD.CUSTID;
    CALL SQLTRGPROC(L_CUSTID, L_FLAG);
  END IF;
  IF UPDATING AND NEW.CUSTID <> OLD.CUSTID THEN
    SET L_FLAG = 'U';
    SET L_CUSTID = OLD.CUSTID;
    CALL SQLTRGPROC(L_CUSTID, L_FLAG);
  END IF;
END;
How to create SQL triggers
We use the Run SQL Statements command, RUNSQLSTM, to add the trigger to the file. The CREATE TRIGGER statement creates an ILE C program in the library, which is the trigger program.
OBJECT       TYPE   ATTRIBUTE   TEXT
TEST_00001   *PGM   CLE         SQL TRIGGER TEST_TESTFILE
How to see SQL triggers
As with an RPG trigger program, if we want to see what triggers are on the file we can use the Display File Description command, DSPFD:
DSPFD FILE(TESTFILE) TYPE(*TRG)
Or we can use the SYSTRIGGER view.
SELECT CAST(TABSCHEMA AS CHAR(10)) AS Table_library,
       CAST(TABNAME AS CHAR(10)) AS Table_name,
       TRIGTIME, EVENT_U, EVENT_I, EVENT_D,
       CAST(TRIGPGMLIB AS CHAR(10)) AS Trigger_library,
       CAST(TRIGPGM AS CHAR(10)) AS Trigger_program,
       TRIGNAME
  FROM QSYS2.SYSTRIGGER
 WHERE TABSCHEMA = 'MYLIB'
   AND TABNAME = 'TESTFILE'
Which gives:
TABLE_LIBRARY   TABLE_NAME   TRIGTIME   EVENT_U   EVENT_I   EVENT_D
MYLIB           TESTFILE     AFTER      Y         Y         Y

TRIGGER_LIBRARY   TRIGGER_PROGRAM   TRIGNAME
MYLIB             TEST_00001        TRG_TESTFILE
How to remove SQL triggers
If we only want to use the new trigger, we need to remove the existing triggers from the file. We could use the Remove Physical File Trigger command, RMVPFTRG:
RMVPFTRG FILE(PGMSDHTST3/WA042P)
Or we could use DROP TRIGGER:
DROP TRIGGER TRG_TESTFILE
This DROP TRIGGER leaves the other trigger, the one with only the delete, in place.
Subqueries
INTRODUCTION:
DB2 allows us to write a query within a query, a concept called sub-querying. This is basically writing a nested SQL statement inside another SQL statement.
Sub-querying allows us to look up data in a file based on a subset of data from the same or another file in the system.
A subquery can be written in the following places:
- In the SELECT clause
- In the FROM clause
- In the WHERE clause using IN/NOT IN/ANY/ALL/EXISTS/NOT EXISTS keywords.
Let us use the following tables to understand subqueries and its possibilities better.
STUDMAST:
A student master file/table.
SUBMAST:
A subject master file/table which holds minimum marks to pass against each subject.
STUDMARKS:
A file/table to hold marks and results that each student has attained against each subject.
1. SUBQUERY IN SELECT CLAUSE:
Let us now look at the table STUDMARKS where all data is available, but nothing is readily understandable. No one can say which student scored how much against what subject just with the help of this single table.
We will now make an SQL with subqueries to display ‘Student Full Name’ and ‘Subject Description’, so that it becomes easier to understand the data that is presented.
Select Marks.Stud_Id,
       (Select Trim(Stud.Stud_Fname) || ' ' || Trim(Stud.Stud_Lname)
          From DEMO.STUDMAST Stud
         Where Stud.Stud_Id = Marks.Stud_Id) As Student_Name,
       (Select Sub.Sub_Desc
          From DEMO.SUBMAST Sub
         Where Sub.Sub_Id = Marks.Sub_Id),
       Marks.Marks,
       Marks.Results
  From DEMO.STUDMARKS Marks
This query would fetch name and subject description from tables STUDMAST and SUBMAST respectively using the respective key values and return text.
QUERY RESULT:
This makes the data much easier to understand for everybody.
2. SUBQUERY IN FROM CLAUSE:
Let’s say the above data needs to be filtered out more and we need to only display the records of students who have failed, the subset can be fetched as follows.
Select Marks.Stud_Id,
       (Select Trim(Stud.Stud_Fname) || ' ' || Trim(Stud.Stud_Lname)
          From DEMO.STUDMAST Stud
         Where Stud.Stud_Id = Marks.Stud_Id) As Student_Name,
       (Select Sub.Sub_Desc
          From DEMO.SUBMAST Sub
         Where Sub.Sub_Id = Marks.Sub_Id),
       Marks.Marks,
       Marks.Results
  From (Select * From DEMO.STUDMARKS Where Results = 'FAIL') As Marks
Though in this case the result could be achieved simply by placing the condition in the WHERE clause, in real life this result set could be narrowed down using complex queries over multiple files/tables, and that subset then feeds the outer query.
3. SUBQUERY IN WHERE CLAUSE:
- USING IN/NOT IN: A simple SQL statement with a subquery to give the names of students who have failed is as follows:
Select *
  From DEMO.STUDMAST Stud
 Where Stud.Stud_Id In (Select Stud_Id
                          From DEMO.STUDMARKS
                         Where Results = 'FAIL')
QUERY RESULT:
A query to get the name of students who have not failed would go as follows:
Select *
  From DEMO.STUDMAST Stud
 Where Stud.Stud_Id Not In (Select Stud_Id
                              From DEMO.STUDMARKS
                             Where Results = 'FAIL')
QUERY RESULT:
- USING EXISTS/NOT EXISTS: The subquery creates a result set that becomes a lookup for the main query to filter and display the final results. The result set achieved by using IN/NOT IN can also be achieved with the EXISTS/NOT EXISTS clause. Find the names of students who have failed as follows:
Select *
  From DEMO.STUDMAST Stud
 Where Exists (Select *
                 From DEMO.STUDMARKS
                Where Results = 'FAIL'
                  And Stud_Id = Stud.Stud_Id)
QUERY RESULT:
Find the names of students who have not failed as follows
Select *
  From DEMO.STUDMAST Stud
 Where Not Exists (Select *
                     From DEMO.STUDMARKS
                    Where Results = 'FAIL'
                      And Stud_Id = Stud.Stud_Id)
QUERY RESULT:
- USING ANY: The ANY keyword before a subquery makes the comparison TRUE if it holds for at least one value in the subquery's result set. The subquery for finding the students who have scored 95 or above in any subject is as follows:
Select *
  From DEMO.STUDMAST Stud
 Where Stud.Stud_Id = ANY(Select Stud_Id
                            From DEMO.STUDMARKS
                           Where Marks >= 95)
QUERY RESULT:
- USING ALL: The ALL keyword before a subquery makes the comparison TRUE only if it holds for every value in the subquery's result set. The query below lists, per student, the subjects in which the student's marks are at or above the average marks of every subject:
Select Mark.Stud_Id,
       (Select Sub.Sub_Desc
          From DEMO.SUBMAST Sub
         Where Sub.Sub_Id = Mark.Sub_Id),
       Mark.Marks,
       Mark.Results
  From DEMO.STUDMARKS Mark
 Where Mark.Marks >= ALL(Select Avg(Marks)
                           From DEMO.STUDMARKS
                          Group By Sub_Id)
QUERY RESULT:
Cursor
Introduction:
In RPGLE (ILE RPG), a cursor is a database feature that allows you to work with the rows of a result set one at a time. Cursors are particularly useful when you need to interact with data stored in a relational database, such as IBM Db2 on the IBM i (AS/400) platform. Cursors offer precise control over retrieving, updating, or deleting rows from database tables.
A cursor contains information on the statement executed and the rows of data accessed by it. The set of rows the cursor holds is called the active set.
Basically, there are four types of cursors in RPGLE-
- Sequential/Serial Cursor
- Scrollable Cursor
- Sensitive Cursor
- Insensitive Cursor
Before jumping directly to the types of cursors, let us understand the major operations we generally perform to retrieve data from the IBM i database using a cursor:
- Declare Cursor
- Open Cursor
- Fetch Operation
- Close Cursor
- Declare Cursor: To declare the cursor, you just need to write the statement in the format below:
Declare + Your Cursor Name + Cursor For + Your SQL Statement
Example in fix format:
Example in free format:
- Open Cursor: After declaring the cursor, you can open it using the OPEN statement, as shown below:
Open + Your Cursor Name
Example in fix format:
Example in free format:
- Fetch Operation: After opening the cursor, you can retrieve data from it using the FETCH statement, written as below:
Fetch From + Cursor Name + Into + Variable names with colon and separated by commas.
Example in fix format:
Example in free format:
- Close Cursor: After fetching the data, you need to close the cursor using the CLOSE keyword, as shown below:
Close + Your Cursor Name
Example in free format:
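Since the screenshots are not reproduced here, below is a minimal free-format sketch of the complete declare/open/fetch/close sequence, reading the BANK table from the stored procedure examples (cursor and variable names are illustrative):
**free
dcl-s wkCustNo   packed(10:0);
dcl-s wkCustName varchar(30);

// Declare the cursor over the result set
exec sql declare csr01 cursor for
  select custno, custname
    from bank
   order by custno;

// Open the cursor
exec sql open csr01;

// Fetch rows one at a time until no more rows remain (SQLCODE 100)
exec sql fetch from csr01 into :wkCustNo, :wkCustName;
dow sqlcode = 0;
  dsply wkCustName;
  exec sql fetch from csr01 into :wkCustNo, :wkCustName;
enddo;

// Close the cursor
exec sql close csr01;
*inlr = *on;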
Sequential/Serial Cursor:
- In a sequential cursor, each row can be fetched only once after the cursor is opened.
- A sequential cursor can fetch rows only in the forward direction, which is why it is also known as a forward-only cursor.
- Once a row is fetched, we cannot move in any other direction within the data set.
- A sequential cursor is defined without the SCROLL keyword.
- If we do not specify the cursor type, the cursor is considered sequential/serial by default.
Fix Format Example of Serial Cursor
Free Format Example for Serial Cursor
Scrollable Cursor:
- A scrollable cursor is defined with the SCROLL keyword.
- A scrollable cursor gives us the ability to navigate the result set in both forward and backward directions.
- In a scrollable cursor, we can move to any row within the result set.
- In a scrollable cursor, we can fetch any row of the result set multiple times.
- In a scrollable cursor, which row is fetched from the result set depends on the keyword used with the FETCH statement.
The following keywords can be used:
- NEXT – Fetches the next row in the result set relative to the current row.
- PRIOR – Fetches the previous row in the result set relative to the current row.
- FIRST – Fetches the first row in the result set.
- LAST – Fetches the last row in the result set.
- CURRENT – Re-fetches the current row from the result set.
- BEFORE – Positions the cursor before the first row of the result set.
- AFTER – Positions the cursor after the last row of the result set.
- RELATIVE n – Fetches the nth row relative to the current row, where n is an integer value that can be positive or negative.
- ABSOLUTE n – If n is 0, the cursor is positioned before the first row of the result table. If n is positive, the cursor is positioned on the nth row of the result table from the top. If n is negative, the cursor is positioned on the nth row of the result table from the bottom.
Use of Next Keyword with Scroll Cursor in Fix Format:
Use of Next Keyword with Scroll Cursor in Free Format:
Use of Prior Keyword with Scroll Cursor in Fix Format:
Use of Prior Keyword with Scroll Cursor in Free Format:
Use of First Keyword with Scroll Cursor in Fix Format:
Use of First Keyword with Scroll Cursor in Free Format:
Use of Last Keyword with Scroll Cursor in Fix Format:
Use of Last Keyword with Scroll Cursor in Free Format:
Use of Current Keyword with Scroll Cursor in Fix Format:
Use of Current Keyword with Scroll Cursor in Free Format:
Use of Before Keyword with Scroll Cursor in Fix Format
Use of Before Keyword with Scroll Cursor in Free Format
Use of After Keyword with Scroll Cursor in Fix Format
Use of After Keyword with Scroll Cursor in Free Format
Use of Relative Keyword with Scroll Cursor in Fix Format
Use of Relative Keyword with Scroll Cursor in Free Format
Use of Absolute Keyword with Scroll Cursor in Fix Format
Use of Absolute Keyword with Scroll Cursor in Free Format
Sensitive Cursor:
- A sensitive cursor has the ability to detect changes made to the underlying data by other processes or users while the cursor is active.
- This means that if another user or program modifies a row in the database table that your sensitive cursor is currently working with, the cursor can recognize that change.
- This helps you to keep your data up to date and avoid errors in a multi-user environment.
Fix Format Example
Free Format Example
Insensitive Cursor:
- Insensitive cursor doesn’t detect changes made to the underlying data by other processes or users while the cursor is active.
- This means insensitive cursor treats that data as static and doesn’t keep track of changes made by others.
- Insensitive cursors are helpful in situations where you don’t want to be affected by changes made by others or processes while you are working with specific data.
- They offer data consistency and can be more efficient than sensitive cursors in certain scenarios.
Free Format Example
Fix Format Example
Cursor in Dynamic/Embedded SQL
To use the cursor in dynamic SQL, we need to follow below steps-
- We need to store our SQL statement into a variable as string.
- Then we can prepare our SQL statement using ‘PREPARE’ keyword.
- Now we can declare the cursor for our prepared SQL statement.
- Now we can process our cursor as normal cursor.
Example of dynamic SQL without parameter marker:
Example of dynamic SQL with parameter markers:
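As the screenshot is not reproduced here, below is a minimal free-format sketch of a dynamic cursor with one parameter marker, again using the BANK table (statement text, cursor, and variable names are illustrative):
**free
dcl-s sqlStmt    varchar(500);
dcl-s wkCustNo   packed(10:0);
dcl-s wkCustName varchar(30);
dcl-s wkMinBal   packed(9:2) inz(500);

// Build the SQL statement as a string; ? is the parameter marker
sqlStmt = 'select custno, custname from bank where balance >= ?';

// Prepare the statement and declare a cursor over it
exec sql prepare dynStmt from :sqlStmt;
exec sql declare csr02 cursor for dynStmt;

// Supply the parameter marker value on OPEN
exec sql open csr02 using :wkMinBal;

exec sql fetch from csr02 into :wkCustNo, :wkCustName;
dow sqlcode = 0;
  dsply wkCustName;
  exec sql fetch from csr02 into :wkCustNo, :wkCustName;
enddo;

exec sql close csr02;
*inlr = *on;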
Join
- Inner join
- Left outer join
- Right outer join
- Exception join
- Cross join
Inner join:
An inner join exclusively presents the rows from each table where there are corresponding values in the join columns. Rows lacking a match between the tables are omitted from the resulting table.
There are two ways to perform an inner join:
- JOIN syntax
- WHERE clause
- Using Join Syntax:
Consider the example below, where we need to fetch a student's project details along with the student's last name, roll number, project number, and marks. The student's identity details are in the STUDENT table and the project details are stored in the PROJECT table. So, to identify which project is assigned to a student, we require a relationship between the STUDENT and PROJECT tables; this relationship can be called a common column on which the join is performed. In our case those columns are STUNO and STUID.
SELECT STUNO, LASTNAME, PROJNO, MARKS
  FROM GRAD.STUDENT
 INNER JOIN GRAD.PROJECT ON STUNO = STUID
 WHERE MARKS > 60
- Using the WHERE clause: To achieve the equivalent of the JOIN syntax using the WHERE clause, include both the join condition and any additional selection condition within the WHERE clause. The tables intended for joining are specified in the FROM clause, separated by commas. See the example below.
SELECT STUNO, LASTNAME, PROJNO, MARKS
  FROM GRAD.STUDENT, GRAD.PROJECT
 WHERE STUNO = STUID AND MARKS > 60
Left outer join:
A left outer join retrieves all the rows obtained from an inner join, along with an additional row for each unmatched row in the first table.
Consider a scenario where you aim to identify all students and their current project assignments, including students who aren't currently assigned to any project. The subsequent query will furnish the details of all students with marks greater than 60, along with the project numbers they're assigned to.
SELECT STUNO, LASTNAME, PROJNO, MARKS
  FROM GRAD.STUDENT
  LEFT OUTER JOIN GRAD.PROJECT ON STUNO = STUID
 WHERE MARKS > 60
Right outer join:
A right outer join retrieves all the rows obtained from an inner join, along with an additional row for each unmatched row in the second table.
Consider a scenario where you aim to identify all projects and the assigned students, including projects which aren’t currently assigned to any student. The subsequent query will furnish the details of all the projects where project group is ‘SCIENCE’, along with the student details.
SELECT PROJNO, STUNO, LASTNAME, MARKS
  FROM GRAD.STUDENT
 RIGHT OUTER JOIN GRAD.PROJECT ON STUNO = STUID
 WHERE PROJGRP = 'SCIENCE'
Exception join:
An exception join selectively retrieves only the rows from the first table that lack a corresponding match in the second table based on the join condition.
Utilizing the identical tables as previously mentioned, we will fetch the student details who aren’t assigned to any projects.
SELECT STUNO, LASTNAME, PROJNO, MARKS
  FROM GRAD.STUDENT
 EXCEPTION JOIN GRAD.PROJECT ON STUNO = STUID
 WHERE MARKS > 60
Cross join:
A cross join, also referred to as a Cartesian Product join, produces a resultant table wherein each row from the first table is paired with every row from the second table. The quantity of rows in the resultant table equals the product of the row count in each table. The result of a cross join will include all the combinations of records from the two tables.
When the involved tables are extensive, this join operation may consume significant time. So, it is advised to filter the tables with a selection criterion that reduces the number of resulting rows as per your requirement.
Consider the tables in our previous examples viz. STUDENT and PROJECT, when a cross join will be performed each row in table STUDENT will be joined with every row in table PROJECT.
SELECT * FROM STUDENT CROSS JOIN PROJECT
Common Table Expression
A 'Common Table Expression' (CTE) is a temporary view that is created for executing an SQL statement and destroyed at the end of execution. A CTE can be used anywhere an entire SQL statement can be written. CTEs improve the readability of the code and reduce repeated usage of the same query within an SQL statement.
CTE Syntax:
WITH cte_Name (Column_List) As
(CTE_Definition)
SQL_Statement;
Cte_Name – This would be the CTE name which will be used to refer the same in the desired SQL Statement.
Column_List – This is the column list from the CTE_Definition. The number of columns defined here should match what is defined within the CTE_Definition. The columns can be renamed here if required. This part is optional.
CTE_Definition – This contains the SQL statement that needs to be defined for the CTE being created.
Let us use the following tables to understand CTEs better.
A student master file/table named STUDMAST.
A subject master file/table which holds minimum marks to pass against each subject, named SUBMAST.
A file/table to hold marks and results that each student has attained against each subject, named STUDMARKS.
A simple CTE over STUDMARKS table to find out the list of students that have failed can be written as follows:
With FAILEDSTUDS As (
    Select Stud_Id, Sub_Id, Marks
      From DEMO.STUDMARKS
     Where Results = 'FAIL')
Select * From FAILEDSTUDS
QUERY RESULT:-
Here, FAILEDSTUDS is the CTE that is defined prior to writing the actual query itself. And the use of CTE makes this SQL look like one of the simplest select queries.
Another sample query to find out the subjects in which students are able to score at least an average mark of 50 can be written as follows:
With MarksAvg As (
    Select Sub_Id, Avg(Marks) As Avg
      From DEMO.STUDMARKS
     Group By Sub_Id)
Select S.*, A.Avg
  From DEMO.SUBMAST S, MarksAvg A
 Where S.Sub_Id = A.Sub_Id
   And A.Avg >= 50
QUERY RESULT:-
Simply, once the CTE is defined, the CTE name itself can be used in the query just like any other database table name and this gives us the freedom to use CTEs in any possible ways like joins, subqueries, etc.
CTEs can also be used with INSERT statements.
If we have a table with detailed information about a subject in each record, and a user requires custom information to be extracted from that data and moved into another table, it can also be achieved in one single query.
We will use the same example, inserting the subjects in which students have scored an average mark of at least 50 into another table whose structure matches the output columns, as follows:
Insert into library/File_name
With MarksAvg As (
Select Sub_ID, Avg(Marks) As Avg
From DEMO.STUDMARKS
Group By Sub_Id)
Select S.*, A.Avg
From DEMO.SUBMAST S, MarksAvg A Where S.Sub_Id = A.Sub_Id And A.Avg >= 50
Merge
In simple terms, the MERGE statement compares key fields between two tables and then modifies one table based on the results of that comparison. This helps in managing data effectively. While the MERGE statement might seem more complicated than basic INSERTs or UPDATEs at first, once you grasp its concept, you’ll find it more convenient to use than handling INSERTs or UPDATEs separately.
Performance considerations for the SQL MERGE statement
The MERGE statement’s efficiency relies heavily on using the right indexes for matching the source and target tables. It’s important to optimize the join conditions and filter the source table to fetch only the required records for the statement to perform its tasks effectively.
Let’s look at example below to have a better understanding.
Imagine you have two tables named source and target. You’re tasked with updating the target table based on matching values from the source table. Here are three scenarios to consider:
- Some rows exist in the source table but not in the target table. In this situation, you’ll need to insert these rows from the source table into the target table.
- Both the source and target tables contain rows with identical keys, but their non-key column values differ. In this case, you’ll need to update the rows in the target table with the corresponding values from the source table.
- There are rows in the target table that don’t exist in the source table. Here, you can keep these unmatched rows.
Using INSERT, UPDATE, and DELETE statements separately requires constructing three distinct statements to update data in the target table with matching rows from the source table.
However, DB2 for i simplifies this process with the MERGE statement, enabling you to perform all three actions simultaneously. Here’s the example of the MERGE statement:
MERGE INTO tbl_target target
USING tbl_source source
   ON target.EMPNO = source.EMPNO
  AND target.EMPDEP = source.EMPDEP
WHEN NOT MATCHED THEN
  INSERT VALUES(EMPNO, EMPADR, EMPLVL, EMPSAL, EMPDEP)
WHEN MATCHED THEN
  UPDATE SET EMPSAL = source.EMPSAL
The statement above compares rows in the target table with those in the source table based on the EMPNO and EMPDEP columns. These columns are the primary keys, so that unique records are selected.
For any row in the source table without a matching EMPNO and EMPDEP row in the target table (NOT MATCHED), an INSERT is performed. It adds a new row to the target table with the EMPNO, EMPADR, EMPLVL, EMPSAL, and EMPDEP values from the source table.
On the other hand, for rows in the source table that do have corresponding rows in the target table (MATCHED), the EMPSAL value in the target table’s row is updated with the value from the source table.
Deletion using MERGE statement:
SQL MERGE can also be used to delete records from a table. Below is an example where employees having EMPLVL less than 2 are deleted from the target table.
MERGE INTO tbl_target target
USING tbl_source source
   ON target.EMPNO = source.EMPNO
  AND target.EMPDEP = source.EMPDEP
WHEN MATCHED AND EMPLVL < 2 THEN
  DELETE
WHEN MATCHED THEN
  UPDATE SET EMPSAL = source.EMPSAL
Jobs & Logs
Job Types and Execution
Jobs are used for performing every task on a system. Within the system, every job has a unique number. All jobs run within subsystems, except for system jobs. A job may enter the subsystem from any work entry, including job queue, workstation, communications, AutoStart, and prestart entries.
Every active job has a minimum of one thread (the main thread) and could have more secondary threads, as well.
Threads are separate work units. The threads of a job share certain job attributes, but they also have some unique attributes of their own, like a call stack.
Information about the work’s processing is contained in the job’s attributes. When attributes are shared by threads inside the same job, the job serves as the owner. With the use of a job’s attributes, work management gives you the ability to control the work completed on your system.
Proper authority: Most changes to a job's attributes require either that your user profile match the job user identity of the job being changed, or job control special authority (*JOBCTL).
Job characteristics: Work management gives you the ability to manage the work completed on your system by using the attributes of a job. But first, you need to understand the different aspects of a job before you can control them.
Job types: Your system processes several different types of jobs. This information describes those jobs and how they are used.
Job Types
Your system processes several different types of jobs. This information describes those jobs and how they are used.
AutoStart jobs
Batch jobs that perform repetitive tasks, one-time initialization tasks related to a specific subsystem, initialize functions for an application, or offer centralized service functions for other jobs in the same subsystem are known as AutoStart jobs. Other subsystems can be started using an AutoStart job in the controlling subsystem (as does the IBM-supplied controlling subsystem). Every time a subsystem is started, the AutoStart jobs associated with it are initiated automatically.
Batch jobs
A batch job is a set of predefined processing operations that are submitted to the system and are intended to be performed with minimal or no user-system interaction. Jobs that can be processed in batches are those that do not need user interaction to complete. A batch job has low priority and may need a specific system environment to run properly.
Communication jobs
A batch job that receives a program start request from a remote system is known as a communications job. Processing a job requires the right specifications and a communication request.
Interactive jobs
A job that begins when a user signs on to a display station and ends when the user logs off is known as an interactive job. The subsystem looks for the job description, which may be found in the user profile or the workstation entry, before allowing the job to run.
Prestart jobs
A batch job that starts running before a work request is received is known as a prestart job. In a subsystem, prestart jobs start before any other type of job. Prestart jobs are different from other jobs because they use prestart job entries (part of the subsystem description) to determine which program, class, and storage pool to use when they are started.
Reader and writer jobs
A writer job processes spooled output, and a reader job processes spooled input.
Server jobs
Server jobs are those that run on your system continuously in the background.
System jobs
The operating system creates system jobs to manage system resources and carry out system operations. When the server boots up or an independent disk pool is turned on, system jobs start to run. These jobs carry out several functions, such as scheduling jobs, initiating and terminating subsystems, and starting the operating system.
Job Execution
The Submit Job (SBMJOB) command can be used in IBM i to submit a job. You can specify the program or command to be run, as well as any input or output parameters, with this command. After being submitted, the job is placed in a job queue and awaits processing. In IBM i environments, SBMJOB offers scheduling flexibility and job execution automation.
Submitting a Batch Job
The command below submits a batch job named TESTJOB. Most of the job's attributes are derived from the job description (JOBD) named TESTD in the library DEMOLIB; the output queue (OUTQ) and message queue (MSGQ) named TESTD are used.
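A sketch of such a submission (the called program is illustrative):
SBMJOB CMD(CALL PGM(DEMOLIB/TESTPGM)) JOB(TESTJOB) JOBD(DEMOLIB/TESTD)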
The job will be placed on the job queue associated with the JOBD; TESTD is the JOBD associated with that JOBQ.
If JOBD is specified as *USRPRF and USER is set to a profile other than the current user, then the session attributes for the job run, such as library list, job queue, and output queue, are taken from the specified user's profile.
One can also select the message logging level for the batch job's job log spool file. The highlighted message logging attributes of SBMJOB allow this.
If all the messages and warnings from the job run should be listed in the spool file irrespective of whether the job ends normally or abnormally, the following values should be set for message logging:
Level – 4
Severity – 0
Text – *SECLVL
If all the messages and warnings from the job run should be listed in the spool file only when the job ends abnormally, the following values should be set for message logging:
Level – 4
Severity – 0
Text – *NOLIST
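On the SBMJOB command these three values map to the LOG parameter, for example (the called program is illustrative):
SBMJOB CMD(CALL PGM(DEMOLIB/TESTPGM)) JOB(TESTJOB) LOG(4 0 *SECLVL)  /* full job log always */
SBMJOB CMD(CALL PGM(DEMOLIB/TESTPGM)) JOB(TESTJOB) LOG(4 0 *NOLIST)  /* job log only on abnormal end */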
Job Description
All the attributes of the job description are saved in an object of type *JOBD.
Limitation
Once a job has started you cannot change the job description.
How does a job pick its job description?
Every user profile has a job description (*JOBD) assigned to it. You can view the job description of a user profile using the DSPUSRPRF command. Most users will have the default job description QDFTJOBD.
Batch Job
When a user submits a job, it uses the job description (*JOBD) of the user. You always have the option to change it while submitting the job with the SBMJOB command.
Interactive Job
When a user signs on, the system looks at the workstation entry in the subsystem description to determine what job description to use for the interactive job. If the workstation entry specifies *USRPRF for the job description, the job description in the user profile is used.
What all attributes a Job description contains?
You can see all attributes of a job description using command DSPJOBD.
E.g: DSPJOBD JOBD(QGPL/QDFTJOBD)
Attributes:
User profile: It is the name of the user profile associated with this job description.
CL syntax check: You can use *NOCHK to specify that requests are not checked for syntax as CL commands. Alternatively, you can specify a value between 0 and 99 as the lowest message severity that can cause the running of a job to end.
Hold on job queue: Specifies whether jobs using this job description are put on the job queue in the hold condition.
End severity: Specifies the message severity level of escape messages that can cause a batch job to end.
Job date: Specifies the date that is assigned to the job that uses this job description when the job is started.
Job switches: Specifies the initial switch settings for a group of eight job switches used for jobs that use this job description. These switches are also called external indicators from U1 through U8.
Inquiry message reply: Specifies the way that inquiry messages are answered for jobs that use this job description.
Job priority: The scheduling priority of jobs that use this job description. You can specify a value from 1 to 9, where 1 is the highest and 9 is the lowest priority.
Job queue: The name and library of the job queue into which jobs using this job description are placed.
Output priority: The output priority for spooled output files produced by jobs using this job description. You can specify a value from 1 to 9, where 1 is the highest and 9 is the lowest priority.
Printer device: The name or reference of the printer device associated with the job description.
Output queue: The name of the output queue that is used as the default output queue for jobs that use this job description.
Message logging: The message logging values (level, severity, and text) for jobs that use this job description.
Log CL program commands: Specifies whether the commands in a control language program are logged to the job log.
Job log output: Specifies how the job log will be produced when the job is completed.
Accounting code: Specifies an accounting code for jobs that use this job description. The accounting code can be up to 15 characters long.
Print text: Specifies the line of text to be printed at the bottom of each page.
Routing data: Specifies the routing data used with this job description to start jobs.
Request data: It is placed as the last entry in the job’s message queue for jobs that use this job description. If you need a batch job to call a specific program every time at the end of its execution, you can specify the call command to that program in Request Data parameter.
DDM conversation: Specifies whether the connections using distributed data management (DDM) protocols remain active when they are not being used. DDMF files are examples that use the DDM protocols.
Device recovery action: Specifies the action to take when an I/O error occurs for an interactive job; it is ignored for batch jobs. Possible actions include ending the job or sending a message to the application program.
Time slice end pool: Specifies whether interactive jobs should be moved to another main storage pool when they reach time slice end, for better system performance.
Job message queue maximum size: Specifies the maximum capacity of the job message queue. It can be in the range of 2 to 64 megabytes.
Job message queue full action: Specifies what action the system takes when the job message queue is full, such as ending the job or printing the messages.
Allow multiple threads: Specifies whether the job is allowed to run with multiple user threads, but not system threads.
Initial ASP group: Specifies the initial setting for the auxiliary storage pool (ASP) group name for the initial thread of jobs using this job description.
Spooled file action: Specifies whether spooled files can be accessed through job interfaces after the job ends. Keeping the spooled files with jobs allows job commands such as Work with Submitted Jobs (WRKSBMJOB) to work with them even after the job has ended; removing them frees the associated storage, which can improve system performance.
Workload group: It is the name of the workload group that is used by jobs that use this job description.
Text: It is the description of the JOBD object user can specify.
Initial library list: The list of libraries in place when a job using this job description starts. For an interactive job, you can edit the library list afterward; for a batch job, you can change it with CL commands inside the program. So it is only the initial library list.
How to work with JOB descriptions?
Use WRKJOBD to work with job descriptions.
E.g.: WRKJOBD JOBD(QGPL/*ALL)
By using the options on this screen, we can create, change, copy, delete and display the job descriptions. Alternatively, we can use the commands below.
CRTJOBD – To create a job description.
CHGJOBD – To change a job description.
DLTJOBD – To delete a job description.
DSPJOBD – To display a job description.
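For illustration, a job description for nightly batch work might be created as below; all object names here are hypothetical:
CRTJOBD JOBD(DEMOLIB/NIGHTLY) JOBQ(QGPL/QBATCH) OUTQ(QGPL/QPRINT) LOG(4 0 *SECLVL) INLLIBL(DEMOLIB QGPL QTEMP) TEXT('Nightly batch job description')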
How to assign job description to Jobs?
Interactive job: An interactive job begins when a user signs on, so we assign the job description to the user profile. Whenever the user logs into the system, that interactive session uses the assigned job description.
We can assign a job description to a user profile while creating it with the CRTUSRPRF command, or change it later with the CHGUSRPRF command.
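A sketch, reusing the hypothetical NIGHTLY job description from above:
CHGUSRPRF USRPRF(JSMITH) JOBD(DEMOLIB/NIGHTLY)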
Batch Job:
The SBMJOB command has a JOBD parameter to assign the job description.
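A sketch (program and JOBD names as in the earlier illustrations):
SBMJOB CMD(CALL PGM(DEMOLIB/TESTPGM)) JOB(TESTJOB) JOBD(DEMOLIB/NIGHTLY)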
Job Log
A job log contains information related to the requests entered for a job, such as:
- The commands in the job
- The commands in a CL program, if the CL program was created with the LOG(*YES) option, or with the LOG(*JOB) option and a Change Job (CHGJOB) command run with the LOGCLPGM(*YES) option.
- All messages and message help sent to the requester and not removed from the program message queues.
At the end of the job, the job log can be written to the output file QPJOBLOG so that it can be printed. After the job log is written to the output file, the job log is deleted.
Controlling information written in a job log
To control what information the system writes in the job log, specify the LOG parameter on the Create Job Description (CRTJOBD) command. You can change the levels by using the Change Job (CHGJOB) command or the Change Job Description (CHGJOBD) command.
Three values make up the LOG parameter: message level, message severity, and message text level.
The first value, message level, has the following levels:
Level | Description |
---|---|
0 | No data is logged. |
1 | The only information logged is messages sent to the job's external message queue with a severity greater than or equal to the message severity specified. Messages of this type indicate when the job started, when it ended, and its status at completion. |
2 | Level 1 information is logged, plus any requests that result in a high-level message with a severity greater than or equal to the message severity specified. Both the request message and all associated messages are logged. |
3 | Level 1 and 2 information is logged, plus all request messages. Commands run by a CL program are also logged if allowed by the log-CL-program-commands job attribute and the log attribute of the CL program. |
4 | All request messages and all messages with a severity greater than or equal to the message severity specified are logged, including trace messages. Commands run by a CL program are also logged if allowed by the log-CL-program-commands job attribute and the log attribute of the CL program. |
The second value, message severity, specifies the severity level in conjunction with the log level that causes error messages to be logged in the job log. Values 0 through 99 are allowed.
The third value in the LOG parameter, message text level, specifies the level of message text that is written in the job log. The values are:
- *SAME – The current value for the message text level does not change.
- *MSG – Only message text is written to the job log (message help is not included).
- *SECLVL – Both the message and the message help (cause and recovery) are written to the job log.
- *NOLIST – No job log is produced if the job ends normally; if the job ends abnormally (job end code 20 or greater), a job log with both message and message help is produced.
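For example, to change an active job so that everything, including second-level message help, is captured (the qualified job name is illustrative):
CHGJOB JOB(123456/QUSER/DAILYBKUP) LOG(4 0 *SECLVL)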
Displaying a job log
The way to display a job log depends on the status of the job.
- The Work with Job Logs (WRKJOBLOG) command can be used to display pending job logs for completed jobs, all job log spooled files, or both. For example, to display the list of pending job logs for all jobs that have ended, enter: WRKJOBLOG JOBLOGSTT(*PENDING)
- If the job is active or in a job queue, or if the job log is pending, use the Display Job Log (DSPJOBLOG) command. For example, to display the job log of the interactive job for user JSMITH at display station WS1, enter: DSPJOBLOG JOB(nnnnnn/JSMITH/WS1), where nnnnnn is the job number.
- If the job has ended and the job log is written to an output file but is not yet printed, use the Display Spooled File (DSPSPLF) command. For example, DSPSPLF FILE(QPJOBLOG) JOB(001293/FRED/WS3) displays the job log for job number 001293 associated with user FRED at display station WS3.
- To display the job log of your own interactive job, do one of the following:
- Enter the command: DSPJOBLOG OR
- Enter the WRKJOB command and select option 10 (Display job log) from the Work with Job display.
- Press F10=Include detailed messages from the Command Entry display (this key displays the messages that are shown in the job log).
- Use the cursor movement keys to get to the end of the job log. To get to the end of the job log quickly, press F18 (Bottom). After pressing F18, you might need to roll down to see the command that is running.
- Use the cursor movement keys to get to the top of the job log. To get to the top of the job log quickly, press F17 (Top).
- To display the job log of an active job from the WRKACTJOB command:
- Use the WRKACTJOB command.
- Take option 5 against the job.
- Type option 10 to see the job log.
- Press F10=Include detailed messages from the Command Entry display (this key displays the messages that are shown in the job log).
- Use the cursor movement keys to get to the end of the job log. To get to the end of the job log quickly, press F18 (Bottom). After pressing F18, you might need to roll down to see the command that is running.
- Use the cursor movement keys to get to the top of the job log. To get to the top of the job log quickly, press F17 (Top).
Preventing the production of job logs
- To prevent a job log from being produced at the completion of a batch job, you can specify *NOLIST for the message text-level value of the LOG parameter on the Batch Job (BCHJOB), Submit Job (SBMJOB), Change Job (CHGJOB), Create Job Description (CRTJOBD), or Change Job Description (CHGJOBD) command.
- If you specify *NOLIST for the message text value of the LOG parameter, the job log is not produced at the end of a job unless the job end code is 20 or greater. If the job end code is 20 or greater, the job log is produced.
- For an interactive job, the value specified for the LOG parameter on the SIGNOFF command takes precedence over the LOG parameter value specified for the job.
- To prevent a job log from being produced when the job completes, but keep it in the system in a pending state, specify *PND for the LOGOUTPUT parameter on the Submit Job (SBMJOB), Change Job (CHGJOB), Create Job Description (CRTJOBD), or Change Job Description (CHGJOBD) command. If you specify *NOLIST on the LOG parameter, no job log is produced and there is no pending job log either. Pending job logs exist only when a job log would normally be written to an output file or database file at job end and the job log output attribute is *PND. You can use the Work with Job Logs (WRKJOBLOG) command to find both pending and written job logs.
Job log from programming perspective
- Writing to the job log: we can send messages from inside a running RPG program using one of IBM's APIs, Qp0zLprintf.
The procedure prototype is as below.
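A commonly used fixed-form ILE RPG prototype (the parameter name is arbitrary):
D Qp0zLprintf     PR            10I 0 EXTPROC('Qp0zLprintf')
D  format                         *   VALUE OPTIONS(*STRING)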
Example
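A minimal sketch; the message text is illustrative, and x'25' is the EBCDIC line-feed that ends the job log line:
/Free
Qp0zLprintf('Step 1 completed successfully' + x'25');
/End-Free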
- Using SQL to get information from Job Logs.
We can get job log information with SQL Select query from table JOBLOG_INFO. It returns one row for each job log message.
SELECT * FROM TABLE(QSYS2.JOBLOG_INFO('*')) A;
The parameter ‘*’ says to retrieve the information from the current job. If we wanted to look at a different job we would change the parameter to the fully qualified jobname such as:
SELECT * FROM TABLE(QSYS2.JOBLOG_INFO('878597/QUSER/QZDASOINIT')) A;
If we wanted to see the job log in reverse order, we can do this with:
SELECT * FROM TABLE(QSYS2.JOBLOG_INFO('*')) A ORDER by ordinal_position desc;
To retrieve only the last message in the job log:
SELECT message_id, message_text, message_second_level_text FROM TABLE(QSYS2.JOBLOG_INFO('*')) A ORDER by ordinal_position desc fetch first row only;
System Logs
IBM i has two types of logs for messages. They are:
- Job log – The log generated for any job on the system, whether interactive or submitted batch, is called a job log.
- History log/System log – Whenever a system activity succeeds or ends abnormally with an error, the system generates a message. All such messages can be viewed in the history log (system log). A message can be associated with a job, system, or device status, or be a system operator message.
How to view the System log?
- DSPLOG command can be used to see the system log.
Note: Since the result is read only, it is safe to run this command in ALL environments.
Command:
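DSPLOG LOG(QHST)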
Result: The history log messages are listed on the display.
To see additional information, press F1 on the corresponding message.
Parameters:
Type the command and press F4 to prompt it; the corresponding parameters are shown.
Log: The only option available for this parameter is QHST, which means system history log.
Time Period: This parameter can be used to view the system log for a certain period. Beginning Time/Date and Ending Time/Date can be input for corresponding time frame.
Time should be specified in 24-hour format and date should be specified in the date format the current job uses.
For example, the input below gives the system log between 1 PM and 3 PM for the current date.
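A sketch of those prompt values as a single command (times in 24-hour hhmmss format):
DSPLOG LOG(QHST) PERIOD((130000 *CURRENT) (150000 *CURRENT))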
Output: This parameter has four options.
* – The result is shown on the display.
*PRINT – A spooled file is generated for the output. Only the first 105 characters of first-level message text are printed for each message.
*PRTWRAP – A spooled file is generated for the output. Up to 2000 characters of first-level message text are printed for each message.
*PRTSECLVL – A spooled file is generated for the output. Up to 2000 characters of first-level message text and 6000 characters of second-level message text are printed for each message.
Additional parameters: After prompting the command (F4), press F10 for additional parameters and page down to enter them.
Job: This parameter can be used to see the system log associated to specific job or specific user.
For example, enter the user SUPERUSER in the Job parameter to see the system log entries associated with that user.
Message Identifier: This parameter can be used in two ways.
*ALL – All messages irrespective of message identifiers, will be included in the output.
Specific/Generic – To look for a specific message, enter its message identifier. To look for certain message types, enter a generic identifier.
For example, to see the job started messages CPF1124 can be input.
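A sketch:
DSPLOG LOG(QHST) MSGID(CPF1124)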
Message Identifier Selection: This parameter is associated with Message Identifier and can be used in two ways.
*INCLUDE – All messages matching the Message Identifier input are included in the output.
*OMIT – All messages matching the Message Identifier input are excluded from the output.
For example, to exclude the job started messages input CPF1124 in Message Identifier and *OMIT in the selection.
Benefits of System log:
- To know the start/end time of a job from the past.
Suppose a job is scheduled in production to run daily, and its start/end times are needed for a daily dashboard, sometimes for past dates. The DSPLOG command can be used with the job name, date, message identifiers CPF1124/CPF1164, and selection *INCLUDE to find those timings, as sketched below.
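A sketch (the job name and the date, given in MMDDYY job-date format, are illustrative):
DSPLOG LOG(QHST) PERIOD((*AVAIL 032524) (*AVAIL 032524)) JOB(DAILYBKUP) MSGID(CPF1124 CPF1164)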
- To analyze the production issue.
Suppose an issue such as a run-time exception is reported in production. If you know the exact job that caused the issue, you can check its job log and find the root cause. If no job details are available, run DSPLOG for the suspected time frame and look for exception messages; press F1 on the message and then F9 to identify the exact job/program and analyze the issue.
Hence, System Log provides useful information and can be used in many scenarios whenever needed.
Job Schedule
Scheduling a job using schedule entry related commands
A job can be scheduled and viewed/modified on job schedule using the below commands:
- (a) ADDJOBSCDE (Add Job Schedule Entry)
- This command is used to schedule a batch job by adding an entry to the job schedule.
Below are the important parameters that need to be considered while adding a job to the schedule:
- JOB: The name of the job as it appears on the schedule list once added; it is also the name under which the job will be submitted.
- CMD: It is the command that will run when the job is submitted on the specified schedule, generally it’s the call command for a program.
- FRQ: It is the frequency of the job submission. Below are the possible values for this parameter and when to use these values:
- *ONCE: If a job needs to be submitted only one time.
- *WEEKLY: If a job needs to be submitted daily OR on a few selected days of the week.
- *MONTHLY: If a job needs to be submitted on a few selected dates of the month (like the first date of every month).
- *YEARLY: If a job needs to be submitted only once a year.
- SCDDAY/SCDDATE: These are the parameters for the scheduled days and scheduled dates of the job. For jobs that should run daily or on specific days of every week, SCDDATE is set to *NONE and SCDDAY is set to the names of the days. For jobs that should be submitted once, monthly, or yearly, SCDDAY is set to *NONE and SCDDATE is set to a specific date, *MONTHSTR, or *MONTHEND.
- SCDTIME: It is the time of the day at which the job is supposed to be executed.
- JOBD: The job description under which the job will run. The JOBD contains the library list the job uses to perform the actions in the program.
- JOBQ: This is the JOBQ in which the job will be submitted, generally for batch jobs it is kept as QBATCH.
- USER: It specifies the name of the user using whose authorities the scheduled job will be executed. Possible values are:
- *CURRENT – The job is submitted under the user profile of the user who is adding the job schedule entry.
- *JOBD – The job will be submitted using the user profile of the JOBD.
- A specific name of the user profile can also be provided.
Below are some examples of adding a new job schedule entry for different scenarios:
- Scheduling a one-time job
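A sketch of the entry; SCDDATE is shown in MMDDYY job-date format, so adjust it to your job's date format:
ADDJOBSCDE JOB(TODAYRPT) CMD(CALL PGM(PGMLIB/GENDAILYRP)) FRQ(*ONCE) SCDDATE(032524) SCDTIME(223000) JOBQ(QBATCH) USER(AZARU)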
The above command translates to “The job TODAYRPT will be submitted on 25th March 2024 at 10:30 PM on JOBQ ‘QBATCH’ using profile AZARU to call the program GENDAILYRP from library PGMLIB”.
- Scheduling a job to execute a program on daily basis:
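A sketch:
ADDJOBSCDE JOB(DAILYRPT) CMD(CALL PGM(PGMLIB/GENDAILYRP)) FRQ(*WEEKLY) SCDDAY(*ALL) SCDDATE(*NONE) SCDTIME(103000) JOBQ(QBATCH) USER(AZARU)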
The above command translates to “The job DAILYRPT will be submitted on ALL days of every week (i.e. daily) at 10:30 AM in the morning on JOBQ ‘QBATCH’ using profile AZARU to call the program GENDAILYRP from library PGMLIB”.
- (b) WRKJOBSCDE (Work with Job Schedule Entries)
- This command allows the user to work with existing scheduled job entries: change, remove, display, hold, or release them, check the last submission details, or submit the job immediately if needed. Below are some sample executions of this command:
- Change the scheduled job.
Provide the name of the job (to work with a specific job only) and press Enter.
Option 2 can be taken to change the job (i.e., change any parameter that was used to add the scheduled entry). Let's assume we want to change the scheduled time of the DAILYRPT job from 10:30 AM to 11:00 AM. On taking option 2, the change screen appears.
We can change the time and press Enter to update the scheduled entry of the job.
*It can be observed that when option 2 is taken on a scheduled entry, the CHGJOBSCDE command is executed. This command can also be used directly from the command line to change the parameters of a scheduled entry, for example:
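A sketch, assuming DAILYRPT has only one scheduled entry so ENTRYNBR(*ONLY) applies:
CHGJOBSCDE JOB(DAILYRPT) ENTRYNBR(*ONLY) SCDTIME(110000)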
- Hold/Release the scheduled job. To hold the job from being submitted for its next execution, take option 3 (take option 6 to release a previously held entry).
As we can see above, the job is currently in SCD (Scheduled) status and will be submitted at 11:00 AM. To stop it from being submitted at 11:00 AM, option 3 can be taken as below:
Job status gets changed to “HLD” (Held) as shown above.
To release this job (bring it back to SCD status), take option 6:
First the job status changes from "HLD" to "RLS" (Released); on pressing F5, the status updates to SCD.
*Scheduled entries can also be held/released using the HLDJOBSCDE/RLSJOBSCDE commands.
- Work with last submission. To work with the most recent submission of the job by the scheduled entry, take option 8.
- Submit immediately. To submit the scheduled entry immediately, take option 10.
*We can observe that submitting the job immediately did not change anything in the "next submit date" column. So, submitting the job using option 10 does not impact the scheduled entry of the job. The same can also be achieved using the SBMJOB command.
- Remove a job schedule entry. To remove the scheduled entry, take option 4:
System asks for the confirmation, press enter again.
Scheduled entry gets removed.
*A scheduled entry can also be removed using command RMVJOBSCDE.
- (c) How to know the scheduled job entry number of all scheduled entries
- Commands like CHGJOBSCDE, HLDJOBSCDE, RLSJOBSCDE, and RMVJOBSCDE require you to supply the job entry number along with the scheduled job name (in case multiple scheduled entries exist with the same name). The job entry number is automatically assigned by the system when a new scheduled entry is added with ADDJOBSCDE (or by pressing F6 from the WRKJOBSCDE display).
To see all the scheduled job details in system (along with scheduled job entry number), we can either print all scheduled entries details using WRKJOBSCDE command OR alternatively we can also use the view “SCHEDULED_JOB_INFO” (Short name SCHED_JOB) which is available in QSYS2 library.
It can be accessed from an SQL session, and data filtering criteria can be applied, for example:
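A sketch; the WHERE clause assumes the SCHEDULED_JOB_NAME column name:
SELECT * FROM QSYS2.SCHEDULED_JOB_INFO WHERE SCHEDULED_JOB_NAME = 'DAILYRPT';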
Scheduling a job using submit job command.
If a job must run not immediately but at a particular date/day and time (once only), this can also be achieved with the SBMJOB command.
We can supply the parameters “SCDDATE” and “SCDTIME” for this purpose.
SCDDATE can be a specific date, a day name (*MON/*TUE etc.), or month start/month end (*MONTHSTR/*MONTHEND). To submit the job on the same day at a later time, leave SCDDATE as *CURRENT and provide only the SCDTIME parameter.
SCDTIME parameter is used to provide the time at which the job will be submitted.
Below is a sample of scheduling a job using the SBMJOB command:
- Sample one time execution scheduling using SBMJOB command.
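A sketch (program and job names are illustrative; SCDDATE is in MMDDYY job-date format):
SBMJOB CMD(CALL PGM(PGMLIB/GENDAILYRP)) JOB(ONETIMERPT) JOBQ(QBATCH) SCDDATE(042624) SCDTIME(110000)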
The above command will schedule the job to run at 11:00 AM on 26th Apr 2024.
The scheduled entries will reside in JOBQ with status SCD till the time to submit is reached. We can see these entries using WRKJOBQ command.
On entering the command WRKJOBQ JOBQ(QBATCH), the scheduled entry appears in the list.
*If needed, we can change, hold, end, or release the job in the JOBQ using the CHGJOB, HLDJOB, ENDJOB, and RLSJOB commands, respectively.
Scheduling a job using the Advanced Job Scheduler
IBM also provides an Advanced Job Scheduler package, which offers more flexibility and options when scheduling jobs (such as holiday calendars, grouping jobs to run one after another, and multiple submissions of a job at different times on the same day).
The scheduler can be accessed through the menu command GO JS, and its options can be used to add, update, or remove jobs in the Advanced Job Scheduler. Different calendars can also be maintained.
IBM has published a Redbook that serves as the Advanced Job Scheduler manual; you can go through it to understand the scheduler's different options.
Job Switches
Job switches in IBM i are a group of eight logical values that can be used to pass information to a submitted job or to exchange information between programs running in the same job. Each position of the 8-character string must be '0' (false) or '1' (true); no other character values are allowed.
These switches can be set or tested in a CL program and used to control the flow of the program.
How can we pass information to a submitted job via job switches?
Job switches are an appropriate way to pass information to a submitted job. When a job enters the system, the initial job switch values can be set in the following ways:
- Using the Job Switches Parameter (SWS) of a Submit Job Command
- Job Switch attribute of the JOBD object used to start the job.
Specifying the initial value of the Job Switches of a submitted job via SWS Parameter in a SBMJOB CMD as below:
SBMJOB CMD (CL-Command-To-Run)
SWS (01010010) /* This 8-Byte Character Value ‘01010010’ is the example */
/* for the Job Switches value to pass to the submitted job */
Specifying the initial value of the Job Switches by setting up the JOBD parameter to JOBD object as below:
/* By assuming each job switch indicates whether a specific task among */
/* all the eight tasks need to be done by the submitted job */
CRTJOBD JOBD (DONOTHING)
SWS (00000000)
CRTJOBD JOBD (EVERYTHING)
SWS (11111111)
SBMJOB CMD (CL-Command-To-Run)
JOBD (DONOTHING) /* The Submitted Job here need not do anything */
SBMJOB CMD (CL-Command-To-Run)
JOBD (EVERYTHING) /* The Submitted Job here needs to do everything */
Accessing the Job Switches
By using CL Commands and Built-in-Function, we can access the Job Switches.
With the help of below DSPJOB command, we can check the Job Switch settings of the current interactive job.
DSPJOB OPTION(*DFNA)
The CL Switch Built-in Function (%SWITCH) tests one or more of the eight switches of the current job under an 8-character mask and returns a logical value of ‘0’ or ‘1’.
Syntax of %SWITCH BIF:
%SWITCH(8-Character-Mask)
Only 0, 1 or X can be specified as the valid characters in every position of the mask.
The valid mask characters have the following meanings for the %SWITCH BIF:
- ‘0’ – The corresponding job switch is to be tested for a 0 (off).
- ‘1’ – The corresponding job switch is to be tested for a 1 (on).
- ‘X’ – The corresponding job switch is not to be tested (X). The value in the switch does not affect the result of %SWITCH.
Simple Job Switch Example in CL
IF COND(%SWITCH('0XX1XXXX')) THEN(GOTO CMDLBL(HOME))
From the above example, we can understand that if the Job Switch# 1 contains ‘0’, and Job Switch# 4 contains ‘1’, then the program will branch to label HOME. The remaining Job Switches 2, 3, 5, 6, 7, and 8 will not be tested.
Other CL Commands used to get or set the Job Switches
With the help of RTVJOBA and CHGJOB command, we can get or set one or more Job Switches of the current job or another job, respectively.
CHGJOB SWS(XXX1XXXX)
The above CL command changes the value of the fourth Job Switch value of the current job to ‘1’.
Accessing the Job Switches via RPG PGM
OPM RPG & ILE RPG supports Job Switches via external indicators (U1 through U8). At the beginning of the RPG Program cycle, the current Job Switches settings are copied to indicators U1-U8 and at the end of the program cycle, the indicator values of U1-U8 are copied back to the Job Switches.
Note that returning from an RPG program with the LR indicator set off does not copy the U1–U8 indicator values back to the job switches of the current job.
RPG PGM Example on how Job Switches settings are updated via external indicators:
D SET_OFF S N INZ(*OFF)
D SET_ON S N INZ(*ON)
/Free
*INU1 = SET_OFF;
*INU2 = SET_ON;
*INLR = *ON;
/End-Free
Note:
Although the RPG compiler prevents assigning constant values other than '0' or '1' (*OFF or *ON) to an indicator variable, it is still possible to set an external indicator to a non-logical value, because assigning a character variable to an indicator variable is allowed in RPG.
Hence, to modify a job switch value in an ILE RPG program via a stand-alone variable, use an indicator type (N) variable instead of a type A character variable to avoid assigning non-logical values, as sketched below.
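A minimal fixed-form sketch of the safe approach (the variable name Sw4 is illustrative):
D Sw4             S               N
/Free
Sw4 = *ON; // an indicator (N) variable can only hold '0' or '1'
*INU4 = Sw4; // job switch 4 is set safely via external indicator U4
/End-Free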
Debug Batch Jobs
A batch job is a job that does not require user interaction to run. In simple terms, a batch job is a scheduled program that runs with little or no user interaction; once submitted, it waits for its turn in the job queue, and on its turn it starts automatically in the background, even if the user signs off.
IBM i system provides facility to schedule batch jobs.
- Schedule daily, weekly, monthly, start or end of the month and so on.
Example: A batch job is scheduled to run at 10:00 PM daily to back up important application files.
Also, batch jobs can be held until a certain time.
Submitting a batch job:
Commonly used command for submitting batch job is Submit Job (SBMJOB) command.
Example:
- In this example, a job named DELETESPL is submitted in QBATCH job queue. The job runs the command DLTSPLF.
SBMJOB CMD(DLTSPLF FILE(*SELECT)) JOB(DELETESPL) JOBQ(QBATCH)
- In this example, a job named DAILYBKUP is submitted to the QBATCH job queue. The job runs the program named BKUPFILES.
SBMJOB CMD(CALL PGM(*LIBL/BKUPFILES)) JOB(DAILYBKUP) JOBQ(QBATCH)
Debugging a batch job:
Unlike interactive jobs, batch jobs require specific steps to perform debugging.
Following are the steps and recommendations for debugging batch jobs.
- Hold the job: The debug command must be executed before the job starts, so the job to be debugged should be held or not yet started. To achieve this:
- Use the submit job (SBMJOB) command with parameter HOLD(*YES) to prevent job from executing immediately.
SBMJOB CMD(CALL PGM(*LIBL/BKUPFILES)) JOB(DAILYBKUP) JOBQ(QBATCH) HOLD(*YES)
The job will be submitted with status "HELD".
- In real applications, jobs are usually submitted from programs with HOLD(*NO). In such cases, when SBMJOB is coded within a program, we can hold the job queue to which the job is submitted using the Hold Job Queue (HLDJOBQ) command. Once the job is submitted, we can hold the specific job with the HLDJOB command (or option 3 = Hold on the WRKUSRJOB screen) and release the job queue with the Release Job Queue (RLSJOBQ) command.
NOTE: The job queue must be released as soon as possible, since holding it for a long time impacts the execution of other jobs submitted to the same job queue.
- Alternatively, a schedule date/time can be specified on the SBMJOB command using the SCDDATE and SCDTIME parameters.
- Find the job attributes: Note down the attributes of the submitted job using the following method:
- Run the command WRKUSRJOB STATUS(*JOBQ) and enter option 5 against the job.
Note down the Job, User, and Number values displayed at the top of the next screen, "Work with Job".
- Start service job: The service job will allow to interactively debug the batch job.
Enter the STRSRVJOB command with the job attributes noted in the last step, for example:
STRSRVJOB JOB(667098/HIMANSHUGA/DAILYBKUP)
- Start debug:
- Enter STRDBG command and enter the program name(s) you want to debug.
STRDBG PGM(BKUPFILES) UPDPROD(*YES)
Module source is displayed on debug screen.
- At this point, debug commands cannot be used since the job is not yet active. Press F12 to return to the command line and release the held job using the RLSJOB command or option 6 = Release on the WRKUSRJOB screen.
The "Start Serviced Job" screen is then displayed.
- Pressing F10=Command entry on that screen takes you to the "Command Entry" screen. Type the Display Module Source (DSPMODSRC) command and press Enter; the module source is displayed again on the debug screen. Breakpoints can be added at this point.
Add breakpoints.
- Now press F3 or F12 to return to the "Command Entry" screen and press F12 again to return to the "Start Serviced Job" screen.
Press ENTER to start the job and continue with debugging.
- Enter STRDBG command and enter the program name(s) you want to debug.
- End debug and End service job:
Once debugging is complete, enter the ENDDBG command to end the debug session and the ENDSRVJOB command to end the service operations for the job that was being serviced.