Database testing: how to test and debug databases

Automated unit testing of application code is simple and straightforward. But how do you test a database, or an application that works with one? After all, a database is not just program code; it is an object that preserves its state. And if we start changing data in the database during testing (and without that, what kind of testing would it be?!), the database will change after every test. This can interfere with subsequent tests and permanently corrupt the database.

The key to solving the problem is transactions. One of the features of this mechanism is that as long as the transaction is not completed, you can always undo all changes and return the database to the state at the time the transaction began.

The algorithm is as follows:

  1. open a transaction;
  2. if necessary, carry out preparatory steps for the test;
  3. run the unit test (or simply the script whose behavior we want to check);
  4. check the result;
  5. roll back the transaction, returning the database to its original state.

Even if there are unclosed transactions in the code under test, the external ROLLBACK will still roll back all changes correctly.
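The five steps can be sketched with Python's built-in sqlite3 module (any transactional DBMS behaves the same way); the table and amounts are illustrative:

```python
import sqlite3

# In-memory database standing in for the database under test.
conn = sqlite3.connect(":memory:")
conn.isolation_level = None          # manage transactions explicitly
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")

conn.execute("BEGIN")                # 1. open a transaction
# 2. preparatory steps (none needed here)
conn.execute("UPDATE accounts SET balance = balance - 40 WHERE id = 1")  # 3. run the code under test
balance = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
assert balance == 60                 # 4. check the result
conn.execute("ROLLBACK")             # 5. undo everything

# The database is back in its original state.
balance = conn.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
assert balance == 100
```

Every test starts from the same known state, no matter what the previous test did to the data.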

This works well when we need to test a SQL script or stored procedure. But what if we are testing an application that connects to the database itself, opening a new connection? Moreover, when debugging, we will probably want to look at the database through the eyes of the application being debugged. What can be done in this case?

Don't rush to set up distributed transactions; there is a simpler solution! Using standard SQL Server facilities, you can open a transaction on one connection and continue it on another.

To do this, you need to connect to the server, open a transaction, obtain a token for that transaction, and then pass this token to the application under test. It will join our transaction in its session and from that moment on, in our debugging session we will see the data (and also feel the locks) exactly as the application under test sees it.

The sequence of actions is as follows:

Having started a transaction in a debug session, we must find out its identifier. This is a unique string by which the server distinguishes transactions. This identifier must somehow be passed to the application under test.

Now the application’s task is to bind to our control transaction before it starts doing what it’s supposed to do.

Then the application starts working: running its stored procedures, opening its own transactions, changing the isolation level... But all this time our debugging session remains inside the same transaction as the application.
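On SQL Server, this handoff can be done with the sp_getbindtoken and sp_bindsession system procedures (legacy but available). As a sketch, the helpers below only build the T-SQL each side would run; executing it requires a live server, so everything except the procedure names is illustrative:

```python
def debug_session_batch() -> str:
    """T-SQL for the debugging session: start the control transaction
    and return its bind token so it can be handed to the application."""
    return (
        "BEGIN TRANSACTION;\n"
        "DECLARE @token varchar(255);\n"
        "EXEC sp_getbindtoken @token OUTPUT;\n"
        "SELECT @token AS bind_token;"
    )

def app_session_batch(token: str) -> str:
    """T-SQL the application under test runs first: join the control
    transaction identified by the token."""
    return f"EXEC sp_bindsession '{token}';"

print(debug_session_batch())
print(app_session_batch("ABC123"))
```

After the application executes its batch, both sessions share one transaction, and the final ROLLBACK can be issued from either side.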

Let's say an application locks a table and starts changing its contents. At this moment, no other connections can look into the locked table. But not our debugging session! From there we can look at the database in the same way as the application does, since the SQL server believes that we are in the same transaction.

While for all other sessions the application's actions are hidden by locks...

Our debugging session passes through the locks (the server thinks they are our own locks)!

Or imagine that the application starts working with its own row versions in SNAPSHOT isolation mode. How can we look into those versions? Even that is possible if we are joined by a common transaction!

Don't forget to roll back the control transaction at the end of this exciting process. This can be done both from the debugging session (if the testing process completes normally) and from the application itself (if something unexpected happens in it).


Database testing is necessary to verify that the database functions correctly. For this purpose, queries of various types are composed against the database: select queries, queries with calculated fields, parameterized queries, queries with grouping, update queries, and delete queries.

Example query: display the list of books borrowed by a specific student. The student's full name is given as a parameter.

Example query: display the list of books by a specific author, indicating their storage locations in the library. The author's full name is given as a parameter.

Example query: determine, by library card number, which class the corresponding student is in and who his class teacher is.

Fig. 15. Query 3: “Find a student by library card number and determine which class he studies in”

Example query: determine, by the Student_FullName parameter, which class the debtor student is in and who his class teacher is.
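Such parameterized queries can be prototyped against a toy schema; the tables, columns, and rows below are assumptions for illustration, not the actual Access schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE students (card_no INTEGER PRIMARY KEY, full_name TEXT, class TEXT);
    CREATE TABLE books    (book_id INTEGER PRIMARY KEY, title TEXT, author TEXT);
    CREATE TABLE loans    (card_no INTEGER, book_id INTEGER);
    INSERT INTO students VALUES (1, 'Ivanov I.I.', '7A');
    INSERT INTO books    VALUES (10, 'Fairy Tales', 'Pushkin A.S.');
    INSERT INTO loans    VALUES (1, 10);
""")

# "List of books taken by a specific student", full name as the parameter.
def books_taken_by(full_name):
    return conn.execute(
        """SELECT b.title
           FROM loans l
           JOIN students s ON s.card_no = l.card_no
           JOIN books b    ON b.book_id = l.book_id
           WHERE s.full_name = ?""",
        (full_name,),
    ).fetchall()

print(books_taken_by("Ivanov I.I."))   # [('Fairy Tales',)]
```

The other example queries differ only in the join and the parameterized column (author name, card number, and so on).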

For convenient work with the records of the various tables, a switchboard (button) form was created, from which any table can be opened to view, update, or change information. The switchboard form is shown in Fig. 17.

Fig. 17. Database switchboard form

CONCLUSION

The final qualifying work was carried out on the topical subject “Development of an information system for a rural school library.”

The goal of the diploma project, to develop an information system for the school library of the municipal secondary school in the village of Solnechny (Fedorovsky district, Saratov region), has been achieved.

During the graduation project, the following tasks were solved:

- the library was considered as an element of the educational environment;

- the state concept of supporting and developing children's reading was studied;

- the technologies used by the libraries of educational institutions were analyzed;

- the subject area was described on the basis of a survey;

- a technical specification was drawn up for the development of an information system for the library of a rural school;

- a functional model of the school library's activities was built;

- the input and output information flows were described;

- an information system based on the Access DBMS was developed;

- the developed relational database was tested.

In the final qualifying work, a technical specification was developed, based on an analysis of the results of a survey of the subject area, for building an information system that automates the manual operations involved in storing, searching for, and accounting for the issuance and return of books by students. The technical specification (TOR) reflected the requirements of the system's users, the library staff.

Based on the technical specifications, a functional model of the activities of a rural school library has been developed. The functional model, in turn, served as material for identifying non-automated areas in the library’s work.

The choice of a DBMS as the development environment was determined by the technical capabilities of the rural library. As a result, the core of the information system, the database, was built on the Access DBMS.

For the convenience of users, a button-based (switchboard) interface was developed.

Corresponding queries were developed to test the database. Their successful execution allows us to judge that the information system for the rural school library performs normally.


Data and database integrity testing

Databases and database processes should be tested as an independent subsystem. In this testing, the subsystems should be exercised without the target user interface acting as the interface to the data. Additional research into the database management system (DBMS) is needed to determine the tools and techniques that support the testing defined in the following table.

Objectives of the methodology:

Testing of database access methods and processes independent of the UI, so that malfunctioning target algorithms or data corruption can be observed and recorded.

Methodology:

Call each database access method or process, populating each one with valid and invalid data or data requests.

Validating the database to ensure that the data is populated correctly and that all database events occur as expected, or validating the returned data to ensure that the correct data is retrieved when necessary.
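A minimal sketch of this methodology using Python's sqlite3 standing in for the DBMS; the schema and the access method are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE readers (
    card_no INTEGER PRIMARY KEY,
    name    TEXT NOT NULL CHECK (name <> ''))""")

def add_reader(card_no, name):
    """Database access method under test."""
    with conn:                         # commits on success, rolls back on error
        conn.execute("INSERT INTO readers VALUES (?, ?)", (card_no, name))

# Valid data: the row must appear exactly as inserted.
add_reader(1, "Petrov P.P.")
assert conn.execute(
    "SELECT name FROM readers WHERE card_no = 1").fetchone() == ("Petrov P.P.",)

# Invalid data: the constraint must reject it and leave the table unchanged.
try:
    add_reader(2, "")                  # violates the CHECK constraint
except sqlite3.IntegrityError:
    pass
assert conn.execute("SELECT COUNT(*) FROM readers").fetchone() == (1,)
```

Each access method gets the same treatment: call it with valid and invalid inputs, then validate the database state directly rather than through the UI.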

Oracles:
Required Tools:

SQL database utilities and tools

data generation tools

Success criteria:

This technique supports testing of all major database access methods and processes.

Special information:

Testing may require a DBMS development environment or drivers to enter or change data directly in the database.

Processes must be called manually.

Small or minimum-size databases (with a limited number of records) should be used to increase the visibility of any corrupting events.

Function testing

Test target function testing should focus on requirements that can be traced directly to use cases or business process functions and rules. The purpose of these tests is to verify the correct acceptance, processing and return of data, as well as the appropriate implementation of business process rules. This type of testing is based on black box techniques, which means testing an application and its internal processes is done by interacting with the application using a Graphical User Interface (GUI) and analyzing the findings or results. The following table defines the testing framework recommended for each application.

Objectives of the methodology:

Test the functionality of the testing target, including navigation, input, processing and return of data to monitor and record the target algorithm.

Methodology:

Execute the tests for each use-case scenario's flows and functions, using valid and invalid data, to verify that:

the expected results are obtained when valid data is used

the appropriate error or warning messages are displayed when invalid data is used

all business process rules are applied correctly
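As an illustration of these checks, a black-box test of one business rule, asserting the expected result for valid data and the expected message for invalid data. The rule itself is invented for illustration:

```python
def register_loan(copies_available: int, requested: int):
    """Business rule (invented): a reader may borrow at most 3 books,
    and no more than are currently available."""
    if requested < 1 or requested > 3:
        return "error: between 1 and 3 books may be borrowed"
    if requested > copies_available:
        return "error: not enough copies available"
    return copies_available - requested       # copies left after the loan

# Valid data yields the expected result.
assert register_loan(5, 2) == 3

# Invalid data yields the appropriate error message.
assert register_loan(5, 4).startswith("error")
assert register_loan(1, 2) == "error: not enough copies available"
```

The test interacts only with the function's public behavior, matching the black-box character of function testing.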

Oracles:

Outline one or more strategies that can be used in the technique to correctly observe test results. An oracle combines elements of both the method by which an observation can be made and the characteristics of a particular outcome that indicate possible success or failure. Ideally, oracles will perform self-checking, allowing for initial assessment of success or failure by automated tests. However, you should be aware of the risks associated with automatically determining results.

Required tools:

This technique requires the following tools:

Test Script Automation Tool

image creation and base recovery tool

backup and recovery tools

installation monitoring tools (registry, HDD, CPU, memory and so on)

data generation tools

Success criteria:

This technique supports testing of:

all major use-case scenarios

all basic functions

Special information:

Identify or describe those elements or issues (internal or external) that affect the implementation and performance of the feature test.

Business Process Cycle Testing

Business process cycle testing should emulate the tasks performed on <Project Name> over time. A period should be defined, such as one year, and the transactions and tasks that will occur during that year should be performed. This includes all daily, weekly, and monthly cycles, as well as date-based events.

Objectives of the methodology:

Test the test target processes and background processes according to the required business process models and plans to monitor and record the target algorithm.

Methodology:

Testing simulates several cycles of a business process by doing the following:

The tests used to test the features of the test target will be modified or extended to increase the number of times each feature is executed to simulate multiple different users over a given period.

All date or time dependent functions will be performed using valid and invalid dates and periods.

All functions that are executed periodically will be executed or started at the appropriate time.

Testing will use valid and invalid data to test the following:

When valid data is used, the expected results are obtained.

If invalid data is used, appropriate error or warning messages are displayed.

All business process rules are applied accordingly.
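The date-dependent part of such a cycle can be sketched in Python's datetime; the 14-day loan period is an assumed rule, not taken from the source:

```python
from datetime import date, timedelta

LOAN_PERIOD = timedelta(days=14)     # assumed business rule

def is_overdue(taken_on: date, today: date) -> bool:
    """Date-dependent function under test."""
    if today < taken_on:
        raise ValueError("check date precedes loan date")
    return today - taken_on > LOAN_PERIOD

taken = date(2024, 1, 10)
# Simulate the check that would run on each day of a 30-day cycle.
statuses = [is_overdue(taken, taken + timedelta(days=d)) for d in range(30)]
assert statuses[:15] == [False] * 15   # days 0..14: within the loan period
assert all(statuses[15:])              # days 15..29: overdue

# Invalid dates must be rejected.
try:
    is_overdue(taken, date(2024, 1, 1))
except ValueError:
    pass
```

The same loop structure scales to weekly and monthly events: iterate over the simulated period and fire each periodic function at its scheduled dates.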

Oracles:

Outline one or more strategies that can be used in the technique to correctly observe test results. An oracle combines elements of both the method by which an observation can be made and the characteristics of a particular outcome that indicate possible success or failure. Ideally, oracles will perform self-checking, allowing for initial assessment of success or failure by automated tests. However, you should be aware of the risks associated with automatically determining results.

Required tools:

This technique requires the following tools:

Test Script Automation Tool

image creation and base recovery tool

backup and recovery tools

data generation tools

Success criteria:

This technique supports testing of all critical business process cycles.

Special information:

System dates and events may require special supporting tasks.

A business process model is required to identify relevant requirements and testing procedures.

User Interface Testing

User interface (UI) testing tests the user's interaction with the software. The goal of UI testing is to ensure that the UI provides the user with appropriate access and navigation to the features of the testing target. UI testing also helps ensure that objects in the UI perform as expected and comply with corporate or industry standards.

Objectives of the methodology:

Test the following to monitor and record compliance with the standards and target algorithm:

Navigation of the test target, reflecting the functions and requirements of the business process, including window-window, field-field methods, and the use of access methods (tab keys, mouse movements, keyboard shortcuts).

You can test window objects and characteristics such as menus, size, layout, state, and focus.

Methodology:

Create or modify per-window tests to verify correct navigation and object states for each application window and object.

Oracles:

Outline one or more strategies that can be used in the technique to correctly observe test results. An oracle combines elements of both the method by which an observation can be made and the characteristics of a particular outcome that indicate possible success or failure. Ideally, oracles will perform self-checking, allowing for initial assessment of success or failure by automated tests. However, you should be aware of the risks associated with automatically determining results.

Required tools:

This technique requires a test script automation tool.

Success criteria:

This technique supports testing of every home screen or window that will be widely used by users.

Special information:

Not all properties of custom objects and third-party objects can be accessed.

Performance Profiling

Performance profiling is a performance test that measures and evaluates response times, transaction speeds, and other time-sensitive requirements. The purpose of performance profiling is to verify that performance requirements have been achieved. Performance profiling is implemented and performed to profile and refine the performance algorithms of a test target as a function of conditions such as workload or hardware configuration.

Note: The transactions listed in the following table are classified as "logical business process transactions". These transactions are defined as specific use cases that an entity is expected to perform using a testing goal, such as adding or modifying a given contract.

Objectives of the methodology:

Test algorithms for specified functional transactions or business process functions under the following conditions to observe and record target algorithm and application performance data:

Methodology:

Apply testing procedures designed to test business process functions and cycles.

Modifying data files to increase the number of transactions or scripts to increase the number of iterations performed in each transaction.

Scripts should be run on a single system (the best option is to start with one user and one transaction) and then repeated with multiple clients (virtual or actual; see Special information below).
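The single-user, single-transaction starting point can be sketched with time.perf_counter; the "transaction" here is a stand-in query against an illustrative sqlite3 table:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v INTEGER)")
conn.executemany("INSERT INTO t (v) VALUES (?)", [(i,) for i in range(10_000)])

def transaction():
    """One 'logical business transaction' to be profiled (illustrative)."""
    return conn.execute("SELECT COUNT(*) FROM t WHERE v % 7 = 0").fetchone()[0]

ITERATIONS = 100
start = time.perf_counter()
for _ in range(ITERATIONS):
    result = transaction()
elapsed = time.perf_counter() - start

assert result == 1429                # the profiled query must also be correct
print(f"{ITERATIONS} transactions in {elapsed:.4f}s "
      f"({elapsed / ITERATIONS * 1000:.3f} ms each)")
```

A real profiling run would repeat this per response-time requirement and then scale out to multiple clients, as the methodology describes.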

Oracles:

Outline one or more strategies that can be used in the technique to correctly observe test results. An oracle combines elements of both the method by which an observation can be made and the characteristics of a particular outcome that indicate possible success or failure. Ideally, oracles will perform self-checking, allowing for initial assessment of success or failure by automated tests. However, you should be aware of the risks associated with automatically determining results.

Required tools:

This technique requires the following tools:

Test Script Automation Tool

application performance profiling tool such as Rational Quantify

installation monitoring tools (registry, hard drive, CPU, memory, etc.)

Success criteria:

This technique supports testing:

Single transaction or single user: Successfully emulate transaction scenarios without failing due to test implementation issues.

Multiple transactions or multiple users: Successfully emulate the workload without failing due to test implementation issues.

Special information:

Comprehensive performance testing includes background load on the server.

There are several methods that can be used, including the following:

"Delivery of transactions" directly to the server, usually in the form of Structured Query Language (SQL) calls.

Creating a "virtual" user load to simulate several clients, usually several hundred. To achieve this load, remote terminal emulation tools are used. This technique can also be used to flood the network with a "data stream".

To create a load on the system, use several physical clients, each of which runs test scripts.

Performance testing should be performed on a dedicated system or at a dedicated time. This provides complete control and accurate measurements.

Databases used for performance testing must either be of actual size or scaled proportionally.

Load testing

Load testing is a performance test in which the test target is subjected to different workloads to measure and evaluate the performance algorithms and the ability of the test target to continue to function appropriately under different workloads. The purpose of load testing is to determine and ensure that the system will operate correctly when subjected to the expected maximum operating load. Load testing also evaluates performance parameters such as response time, transaction speed, and other time-related parameters.

Note: The transactions listed in the following table are classified as "logical business process transactions". These transactions are defined as specific functions that the user is expected to perform while using the application, such as adding or changing a given contract.

Objectives of the methodology:

Execute designated transactions or business process variations under varying workload conditions to monitor and record target algorithm and system performance data.

Methodology:

Use transaction test scripts designed to test business process functionality and cycles as a basis, but remove unnecessary iterations and delays.

Modifying data files to increase the number of transactions or tests to increase the number of times each transaction is executed.

Workloads should include - for example, daily, weekly and monthly - peak loads.

Workloads should represent both average and peak loads.

Workloads must represent both instantaneous and long-term peak loads.

Workloads should be tested in different test environment configurations.
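A minimal sketch of simulated concurrent users with a thread pool; a real load test would use a dedicated tool, and the user and transaction counts here are arbitrary:

```python
import sqlite3
import threading
from concurrent.futures import ThreadPoolExecutor

# check_same_thread=False lets the shared in-memory DB be used from threads.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE hits (user_id INTEGER)")
lock = threading.Lock()              # serialize access to the single connection

def user_session(user_id, transactions=20):
    """One simulated user performing a fixed number of transactions."""
    for _ in range(transactions):
        with lock:
            conn.execute("INSERT INTO hits VALUES (?)", (user_id,))
            conn.commit()

USERS = 10
with ThreadPoolExecutor(max_workers=USERS) as pool:
    for uid in range(USERS):
        pool.submit(user_session, uid)

# Under load, every transaction must still have been applied exactly once.
total = conn.execute("SELECT COUNT(*) FROM hits").fetchone()[0]
assert total == USERS * 20
```

Varying USERS and the transaction count per session gives the average, peak, and sustained workloads the methodology calls for.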

Oracles:

Outline one or more strategies that can be used in the technique to correctly observe test results. An oracle combines elements of both the method by which an observation can be made and the characteristics of a particular outcome that indicate possible success or failure. Ideally, oracles will perform self-checking, allowing for initial assessment of success or failure by automated tests. However, you should be aware of the risks associated with automatically determining results.

Required tools:

This technique requires the following tools:

Test Script Automation Tool

installation monitoring tools (registry, hard drive, CPU, memory, etc.)

resource limiting tools; for example, Canned Heat

data generation tools

Success criteria:

This technique supports workload emulation testing, which is a successful emulation of the workload without failures due to test implementation issues.

Special information:

Load testing should be performed on a dedicated system or at a dedicated time. This provides complete control and accurate measurements.

Databases used for load testing must either be of actual size or scaled proportionally.

Stress testing

Stress testing is a type of performance test implemented and performed to understand how a system fails under conditions at or beyond expected tolerances. It usually involves scarcity of resources or competition for them. Low-resource conditions reveal how the test target fails in ways that are not obvious under normal conditions. Other flaws may result from contention for shared resources, such as database locks or network bandwidth, although some of these tests are typically performed during function and load testing.

Objectives of the methodology:

Test the functions of the test target under the following stress conditions, in order to observe and record the target's behavior and to identify and document the conditions under which the system fails or continues to operate appropriately:

low memory or a lack of free memory on the server (RAM and persistent storage)

the maximum physically or actually possible number of connected or simulated users

multiple users perform the same transactions with the same data or accounts

"excessive" transaction volume or a mixture of conditions (see Performance Profiling section above)

Methodology:

To test limited resources, tests should be run on a single system, and the RAM and persistent memory on the server should be reduced or limited.

For other stress tests, multiple clients should be used, running the same or complementary tests to generate the worst-case transaction volume or mix of transactions.

Oracles:

Outline one or more strategies that can be used in the technique to correctly observe test results. An oracle combines elements of both the method by which an observation can be made and the characteristics of a particular outcome that indicate possible success or failure. Ideally, oracles will perform self-checking, allowing for initial assessment of success or failure by automated tests. However, you should be aware of the risks associated with automatically determining results.

Required tools:

This technique requires the following tools:

Test Script Automation Tool

transaction load planning and management tool

installation monitoring tools (registry, hard drive, CPU, memory, etc.)

resource limiting tools; for example, Canned Heat

data generation tools

Success criteria:

This technique supports stress emulation testing. The system can be successfully emulated under one or more conditions defined as stress conditions, and observations of the resulting state of the system can be recorded during and after the condition is emulated.

Special information:

To create stress on the network, network tools may be required to load the network with messages and packets.

The persistent memory used for the system should be temporarily reduced to limit the available space for database growth.

You should synchronize the simultaneous access of clients to the same data records or accounts.

Capacity testing

Capacity testing subjects the test target to large volumes of data to determine whether limits are reached that cause the system to fail. Capacity testing also determines the maximum sustained load or volume that the test target can handle over a given period. For example, if the test target processes a set of database records to generate a report, a capacity test would use a large test database and verify that the software behaves normally and produces the correct report.

Objectives of the methodology:

Test the test target's functionality in the following high-capacity scenarios to observe and record the target algorithm:

The maximum (actual or physically possible) number of connected or simulated clients performing the same (worst in terms of performance) business process function over a long period.

The maximum database size (actual or scaled) is reached, and multiple queries or report transactions run simultaneously.

Methodology:

Use tests designed for performance profiling or load testing.

Multiple clients should be used running the same or additional tests to generate the worst case volume of transactions or a mixture of them (see stress testing) over an extended period.

The maximum size of the database (actual, scaled, or populated with representative data) is created and multiple clients are used simultaneously to run queries and reporting transactions over an extended period.
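A toy sketch of a capacity check: populate a deliberately large table and verify that the report logic stays correct at volume, not merely that it finishes. The table and row counts are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (id INTEGER PRIMARY KEY, card_no INTEGER)")

ROWS = 200_000                        # "large" is relative; scale to taste
conn.executemany("INSERT INTO loans (card_no) VALUES (?)",
                 ((i % 500,) for i in range(ROWS)))
conn.commit()

# The report must remain correct at volume.
total = conn.execute("SELECT COUNT(*) FROM loans").fetchone()[0]
per_reader = conn.execute(
    "SELECT COUNT(*) FROM loans GROUP BY card_no LIMIT 1").fetchone()[0]
assert total == ROWS
assert per_reader == ROWS // 500      # 400 loans per reader, evenly spread
```

A real capacity test would run such queries from multiple clients simultaneously over an extended period, as the methodology describes.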

Oracles:

Outline one or more strategies that can be used in the technique to correctly observe test results. An oracle combines elements of both the method by which an observation can be made and the characteristics of a particular outcome that indicate possible success or failure. Ideally, oracles will perform self-checking, allowing for initial assessment of success or failure by automated tests. However, you should be aware of the risks associated with automatically determining results.

Required tools:

This technique requires the following tools:

Test Script Automation Tool

transaction load planning and management tool

installation monitoring tools (registry, hard drive, CPU, memory, etc.)

resource limiting tools; for example, Canned Heat

data generation tools

Success criteria:

This technique supports capacity emulation testing. Large numbers of users, volumes of data, transactions, or other aspects of system usage can be successfully emulated, and the changes in system state can be observed throughout the capacity test.

Special information:

Security and access control testing

Security and access control testing focuses on two key areas of security:

Application-level protection, including access to data or business process functions

System-level security, including login or remote system access

Based on the required level of protection, application-level protection ensures that subjects only have access to certain features or use cases, or that the data available to them is limited. For example, entry of data and creation of new accounts can be allowed to everyone, but deletion - only to managers. If there is data-level security, then testing ensures that the “type 1 user” has access to all customer information, including financial data, while the “type 2 user” only has access to demographic data about that same customer.

System-level security ensures that only users with system permissions have access to applications, and only through the appropriate gateways.

Objectives of the methodology:

Test the testing target under the following conditions in order to observe and record the target algorithm:

Application-level protection: the subject has access only to those functions and data for which a user of this type has access rights.

System-level security: Access to applications is limited to those with system and application permissions.

Methodology:

Application-level security: Define and list all user types and the functions and data that each type of user is allowed to access.

Create tests for each user type and check all access rights by creating transactions defined for each user type.

Changing the user type and rerunning tests for the same users. In each case, checking that access to additional functions or data is correctly allowed or denied.

System Level Access: See Special Information below.
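The user-type methodology can be sketched as a permission matrix checked mechanically; the user types and operations below are invented for illustration:

```python
# Invented permission matrix: user type -> allowed operations.
PERMISSIONS = {
    "librarian": {"view", "add", "update", "delete"},
    "teacher":   {"view", "add"},
    "student":   {"view"},
}

ALL_OPERATIONS = {"view", "add", "update", "delete"}

def is_allowed(user_type: str, operation: str) -> bool:
    """Application-level access check under test."""
    return operation in PERMISSIONS.get(user_type, set())

# For every user type, every operation is checked against the matrix:
# access must be correctly allowed or denied in each case.
for user_type, allowed in PERMISSIONS.items():
    for op in ALL_OPERATIONS:
        assert is_allowed(user_type, op) == (op in allowed)

# An unknown user type gets nothing.
assert not is_allowed("anonymous", "view")
```

Rerunning the same transactions while switching the user type, as the methodology prescribes, amounts to iterating this matrix from the application's side.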

Oracles:

Outline one or more strategies that can be used in the technique to correctly observe test results. An oracle combines elements of both the method by which an observation can be made and the characteristics of a particular outcome that indicate possible success or failure. Ideally, oracles will perform self-checking, allowing for initial assessment of success or failure by automated tests. However, you should be aware of the risks associated with automatically determining results.

Required tools:

This technique requires the following tools:

Test Script Automation Tool

"Hacker" tools for testing and finding security holes

OS security administration tools

Success criteria:

This methodology supports testing of the relevant features and data affected by security settings for each known user type.

Special information:

System access should be verified and discussed with appropriate system or network administrators. This testing may not be necessary because it may be part of network or system administration functions.

Testing disaster recovery

Disaster recovery testing verifies that the test target can be successfully recovered from a variety of hardware, software, and network failures without excessive loss of data or damage to data integrity.

For systems that must continue to operate, disaster recovery testing ensures that, in the event of a recovery from a failure, alternative or backup systems correctly "take over" for the system that failed without losing data or transactions.

Recovery testing is an antagonistic testing process in which the application or system is exposed to extreme conditions, or simulated conditions that cause failures, such as device I/O failures or invalid database pointers and keys. Recovery processes are invoked, and the application or system is monitored and controlled to verify that the application or system, and its data, are recovered properly.

Objectives of the methodology:

Simulate failure conditions and test recovery processes (manual and automatic) of the database, applications, and system to the desired known state. To monitor and record the operation algorithm after recovery, the following types of conditions are included in testing:

power interruption in the client system

power interruption in the server system

connection interruption through network servers

interrupted connection to, or loss of power to, DASD (direct access storage devices) and DASD controllers

incomplete cycles (interruption of data filtering processes, interruption of data synchronization processes)

invalid pointers and database keys

invalid or corrupted data items in the database

Methodology:

You can use tests already created to test business process functionality and cycles as a basis for creating a series of transactions to support recovery testing. The first step is to identify tests for recovery success.

Power interruption in the client system: Power off the PC.

Server system power interruption: Simulates or initiates power-down procedures for the server.

Interrupt via network servers: Simulates or initiates a loss of network connection (physically disconnecting connecting wires or turning off power to network servers or routers).

Interruption of connection or loss of power to DASDs and DASD controllers: simulation or physical loss of connection to one or more DASDs or DASD controllers.

When the above or simulated conditions are reached, additional transactions should be executed and recovery procedures should be invoked when this second test step is reached.

When testing incomplete loops, the same methodology is used as described above, except that the database processes themselves must be interrupted or terminated prematurely.

Testing the following conditions requires that a known database state be reached. Several database fields, pointers, and keys should be corrupted manually and directly in the database (using database tools). Additional transactions should then be executed, using the tests from application function testing and business process cycle testing, with the cycles completed in full.
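The incomplete-cycle case can be sketched with sqlite3 on a file-backed database: a transaction is abandoned mid-flight (a simulated crash), and on reconnecting, the database must be back in its last committed state:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "library.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE loans (id INTEGER PRIMARY KEY, card_no INTEGER)")
conn.execute("INSERT INTO loans VALUES (1, 100)")
conn.commit()                         # last known good state

# Begin a cycle and interrupt it before it completes.
conn.execute("INSERT INTO loans VALUES (2, 200)")   # not committed
conn.close()                          # simulated crash mid-transaction

# "Recovery": reopen the database and verify the known state.
conn = sqlite3.connect(path)
rows = conn.execute("SELECT id FROM loans").fetchall()
assert rows == [(1,)]                 # the interrupted insert was rolled back
```

Power and connection failures require the physical or simulated interruptions described above; this sketch covers only the data-level recovery check that follows them.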

Oracles:

Outline one or more strategies that can be used in the technique to correctly observe test results. An oracle combines elements of both the method by which an observation can be made and the characteristics of a particular outcome that indicate possible success or failure. Ideally, oracles will perform self-checking, allowing for initial assessment of success or failure by automated tests. However, you should be aware of the risks associated with automatically determining results.

Required tools:

This technique requires the following tools:

image creation and base recovery tool

installation monitoring tools (registry, hard drive, CPU, memory, etc.)

backup and recovery tools

Success criteria:

This technique supports testing:

One or more simulated failures involving one or more combinations of applications, database, and system.

One or more simulated recoveries, involving one or more combinations of applications, database, and system, to a known desired state.

Special information:

Recovery testing is largely intrusive. Procedures for disconnecting electrical cables (when simulating power or connection loss) may not be desirable or feasible. Alternative methods such as diagnostic software tools may be required.

Requires resources from systems (or computer operations), databases, and network groups.

These tests should be performed outside of normal business hours or on an isolated system.

Configuration testing

Configuration testing verifies the operation of the test target under different hardware and software configurations. In most work environments, the specific hardware specifications for client workstations, network connections, and database servers may vary. Client workstations may have different software loaded (e.g., applications, drivers, and so on), and many different combinations of software may be active at the same time, using different resources.

Objectives of the methodology:

Verify the test target in the required hardware and software configurations, to observe and record the target's behavior in each configuration and to determine differences in its state between configurations.

Methodology:

Application of function tests.

Opening and closing various non-test related software, such as Microsoft® Excel® and Microsoft® Word® applications, either as part of a test or before running a test.

Execute selected transactions to simulate entities interacting with the test target and non-test target software.

Repeat the above process, minimizing the available main memory on the client workstation.

Oracles:

Outline one or more strategies that can be used in the technique to correctly observe test results. An oracle combines elements of both the method by which an observation can be made and the characteristics of a particular outcome that indicate possible success or failure. Ideally, oracles will perform self-checking, allowing for initial assessment of success or failure by automated tests. However, you should be aware of the risks associated with automatically determining results.

Required tools:

This technique requires the following tools:

image creation and base recovery tool

installation monitoring tools (registry, hard drive, CPU, memory, etc.)

Success criteria:

This technique supports testing of one or more combinations of test target elements executed in the expected supported development environments.

Special information:

What non-target software is required, available, and accessible on the desktop?

What applications are commonly used?

What data do the applications operate on? For example, a large spreadsheet open in Excel, or a 100-page document in Word.

As part of this test, you should also document the network, network servers, and system databases in general.

Testing the installation

Testing the installation has two purposes. The first is to make sure that the software can be installed (e.g., new installation, update, and full or custom installation) under various standard and non-standard conditions. Non-standard conditions include insufficient disk space, insufficient permissions to create directories, and so on. The second purpose is to verify that the software works correctly after installation. Typically this is accomplished by running a series of tests designed to exercise its functionality.

Objectives of the methodology:

Perform a test target installation on each required hardware configuration under the following conditions to observe and record installation behavior and configuration state changes:

new installation: a new system on which <project name> has never been installed

update: a system on which the same version of <project name> was previously installed

version update: a system on which an earlier version of <project name> was previously installed

Methodology:

Develop automated or manual scripts to test target system conditions.

new: <project name> has never been installed

update: the same or an earlier version of <project name> has already been installed

Start or complete the installation.

Apply a predetermined subset of function-testing scenarios, executing transactions.

Oracles:

Outline one or more strategies that can be used in the technique to correctly observe test results. An oracle combines elements of both the method by which an observation can be made and the characteristics of a particular outcome that indicate possible success or failure. Ideally, oracles will perform self-checking, allowing for initial assessment of success or failure by automated tests. However, you should be aware of the risks associated with automatically determining results.

Required tools:

This technique requires the following tools:

image creation and base recovery tool

installation monitoring tools (registry, hard drive, CPU, memory, etc.)

Success criteria:

This technique supports testing the installation of a developed product in one or more installation configurations.

Special information:

Which <project name> transactions should be chosen to provide a reliable test that the <project name> application was installed successfully and that no important software components are missing?

Database testing is not as common as testing other parts of an application. In some test suites the database is simply mocked out entirely. In this article I will try to look at tools for testing relational and NoSQL databases.

This situation is due to the fact that many databases are commercial and the entire necessary set of tools for working with them is supplied by the organization that developed this database. However, the growing popularity of NoSQL and various MySQL forks in the future may change this state of affairs.

Database Benchmark

Database Benchmark is a .NET tool designed to stress test databases with large data streams. The application runs two main test scenarios: inserting a large number of randomly generated records with sequential or random keys, and reading inserted records ordered by their keys. It has extensive capabilities for generating data, graphical reports and configuring possible types of reports.
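As a rough illustration of those two scenarios (this is a hypothetical miniature using Python's built-in sqlite3, not part of Database Benchmark itself): insert many records with randomly generated keys, then read them back ordered by key.

```python
import random
import sqlite3

# In-memory database standing in for the system under test.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (key INTEGER PRIMARY KEY, payload TEXT)")

# Scenario 1: insert randomly generated records with random (unique) keys.
keys = random.sample(range(1_000_000), 10_000)
conn.executemany(
    "INSERT INTO records VALUES (?, ?)",
    ((k, f"payload-{k}") for k in keys),
)

# Scenario 2: read the inserted records back, ordered by their keys.
rows = conn.execute("SELECT key FROM records ORDER BY key").fetchall()
assert all(rows[i][0] < rows[i + 1][0] for i in range(len(rows) - 1))
```

A real benchmark would time both phases and vary sequential versus random key order; the sketch only shows the workload's shape.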

Supported databases: MySQL, SQL Server, PostgreSQL, MongoDB and many others.

Database Rider

Database Rider is designed to make database testing no more difficult than unit testing. The tool is based on Arquillian, so a Java project only needs a dependency on DBUnit. It also offers JUnit-style annotations, integration with CDI via interceptors, support for JSON, YAML, XML, XLS, and CSV datasets, configuration via the same annotations or yml files, integration with Cucumber, support for multiple databases, and handling of temporal types in datasets.

DbFit

DbFit is a framework for test-driven database development. It is written on top of FitNesse, a mature and powerful tool with a large community. Tests are written as tables, which makes them more readable than regular unit tests. You can run them from an IDE, from the command line, or from CI tools.

Supported databases: Oracle, SQL Server, MySQL, DB2, PostgreSQL, HSQLDB and Derby.

dbstress

dbstress is a database performance and stress testing tool written in Scala and Akka. Using a JDBC driver, it executes queries in parallel a specified number of times (possibly against several hosts) and saves the final result to a CSV file.

Supported databases: anything with a JDBC driver.

DbUnit

DbUnit is a JUnit extension (also usable with Ant) that can return the database to a known state between tests. This avoids dependencies between tests: if one test fails and corrupts the database, the next test still starts from a clean state. DbUnit can transfer data between the database and XML documents, can work with large datasets in streaming mode, and can verify that the resulting database matches an expected standard.
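The same return-to-known-state idea can be sketched in Python with unittest and sqlite3. This is a hypothetical analogue of the pattern, not DbUnit's actual Java API: the seed dataset is reloaded before every test, so one test cannot corrupt the data seen by the next.

```python
import sqlite3
import unittest

# The known "dataset", analogous to DbUnit's XML dataset file.
SEED_ROWS = [(1, "Alice"), (2, "Bob")]

class AccountTest(unittest.TestCase):
    def setUp(self):
        # Recreate the known state before every test, so a test that
        # deletes or corrupts data cannot affect the tests after it.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        self.conn.executemany("INSERT INTO users VALUES (?, ?)", SEED_ROWS)

    def tearDown(self):
        self.conn.close()

    def test_delete_user(self):
        self.conn.execute("DELETE FROM users WHERE id = 1")
        count = self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
        self.assertEqual(count, 1)

    def test_full_dataset_present(self):
        # Runs after test_delete_user, yet still sees the full dataset.
        rows = self.conn.execute("SELECT id, name FROM users ORDER BY id").fetchall()
        self.assertEqual(rows, SEED_ROWS)

suite = unittest.TestLoader().loadTestsFromTestCase(AccountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

DbUnit does the same thing at the level of a real database connection and external dataset files, rather than an in-memory database created in setUp.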

Supported databases: anything with a JDBC driver.

DB Test Driven

DB Test Driven is a tool for unit testing databases. The utility is very lightweight, works with native SQL, and is installed directly into the database. It integrates easily with continuous integration tools, and the SQL Server version can evaluate test code coverage.

Supported databases: SQL Server, Oracle.

HammerDB

HammerDB is an open-source load and benchmark testing tool for databases. It is automated, multi-threaded, and extensible with support for dynamic scripts.

JdbcSlim

JdbcSlim offers easy integration of queries and commands into Slim FitNesse. The project's focus is on keeping configuration, test data, and SQL separate. This ensures that requirements are written independently of the implementation and remain understandable to business users. JdbcSlim is database-agnostic and contains no code specific to any database system. The framework describes everything at a high level; any database-specific behavior is implemented by changing just one class.

Supported databases: Oracle, SQL Server, PostgreSQL, MySQL and others.

JDBDT (Java DataBase Delta Testing)

JDBDT is a Java library for testing SQL-based database applications. It is designed for automated database setup and validation in tests. JDBDT has no dependencies on third-party libraries, which simplifies its integration. Compared to existing database testing libraries, JDBDT is conceptually distinct in its ability to use δ-assertions.

Supported databases: PostgreSQL, MySQL, SQLite, Apache Derby, H2 and HSQLDB.

NBi

NBi is essentially an add-on for NUnit, aimed primarily at the Business Intelligence sphere. In addition to relational databases, it can work with OLAP platforms (Analysis Services, Mondrian, etc.), ETL, and reporting systems (Microsoft technologies). The main goal of the framework is to create tests using a declarative, XML-based approach. You do not need to write tests in C# or use Visual Studio to compile them: you simply create an XML file and let NBi interpret it, then run the tests. Besides NUnit, it can be ported to other test frameworks.

Supported databases: SQL Server, MySQL, PostgreSQL, Neo4j, MongoDB, DocumentDB and others.

NoSQLMap

NoSQLMap is written in Python to audit for injection attacks and exploitable weaknesses in database configuration, and to assess how resistant a web application using a NoSQL database is to this type of attack. The main goals of the application are to provide a tool for testing MongoDB servers and to dispel the myth that NoSQL applications are impervious to injection.

Supported databases: MongoDB.

NoSQLUnit

NoSQLUnit is a JUnit extension for writing tests for Java applications that use NoSQL databases. The goal of NoSQLUnit is to manage the lifecycle of NoSQL databases in tests. It helps you keep the databases under test in a known state and standardizes the way tests are written for applications using NoSQL.

Supported databases: MongoDB, Cassandra, HBase, Redis and Neo4j.

ruby-plsql-spec

ruby-plsql-spec is a framework for unit testing PL/SQL using Ruby. It is based on two other libraries:

  • ruby-plsql – Ruby API for calling PL/SQL procedures;
  • RSpec is a framework for BDD.

Supported databases: Oracle.

SeLite

SeLite is an extension from the Selenium family. The main idea is to have an SQLite-based database isolated from the application. You can detect web server errors, share scripts between tests, work with snapshots, and so on.

Supported databases: SQLite, MySQL, PostgreSQL.

sqlmap

sqlmap is a penetration testing tool that can automate the process of detecting and exploiting SQL injections and taking over database servers. It is equipped with a powerful detection engine and many niche pentesting features.

Supported databases: MySQL, Oracle, PostgreSQL, SQL Server, DB2 and others.


1) Goals and objectives

2) Description of the database

3) Working with the database

4) Load testing of the database

5) Conclusion

6) Literature

Goals and objectives

Goal: create a database of elixirs for the game The Witcher 3, containing information about the types of elixirs, their properties, what they are made from, the places where their ingredients can be found, and the monsters against which they can be used. Create optimized queries for this database and load test it.

Tasks:

· Create a database schema with at least 5 entities in MySQL Workbench. Describe these entities and their connections.

· Describe the use of this database, describe the main queries, examine their execution time, and draw conclusions.

· Database optimization

· Perform load testing using Apache JMeter. Use its extensions to build graphs.

Database Description

This course work uses the Witcher1 database, whose main entities are the following tables:

Fig.1 Schematic representation of the Witcher1 database

The Ingridients table contains the ingredients needed to create the elixirs in the game, which are described in the Elixirs table. Several ingredients are used to create an elixir, but each ingredient is unique to its elixir. For this reason, a 1:n (one-to-many) relationship was established between these tables, as shown in the database diagram (Fig. 1).

The Ingridients table also contains information about the names of the ingredients (Discription) and where this ingredient can be found (WhereFind). The idElixirs column is a linking column for the Ingridients and Elixirs tables.

The Elixirs table contains information on how to use a specific elixir and the name of that elixir. This table is the key table for the other tables.

The Locations table contains information about which location or city a specific ingredient can be found in.

Table IL contains consolidated information about where and how to find a specific ingredient in a given area and what it is. An n:m (many to many) relationship was established between the Ingridients and Locations tables, since multiple ingredients can be found in multiple locations, as indicated in the IL child table.

The Monsters table contains information about the types of monsters in “The Witcher 3”: how to recognize each monster and the names characteristic of them.

The ML table is a child table of the n:m join of the Locations and Monsters tables. It contains specific information about how to defeat a particular monster and which elixirs can be used (including the special witcher signs), as well as in what area and by what signs to look for that type of monster.
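Assuming the column names described above, the 1:n and n:m relationships can be sketched as SQLite DDL driven from Python. This is a simplified approximation of the MySQL schema, not the actual course-work script; the Monsters and ML tables are omitted for brevity, and the document's "Ingridients" spelling is kept as-is.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Elixirs (
    idElixirs INTEGER PRIMARY KEY,
    ElixirName TEXT,
    ElixirDiscription TEXT
);
-- 1:n -- each ingredient belongs to exactly one elixir.
CREATE TABLE Ingridients (
    idIngridients INTEGER PRIMARY KEY,
    Discription TEXT,
    WhereFind TEXT,
    idElixirs INTEGER REFERENCES Elixirs(idElixirs)
);
CREATE TABLE Locations (
    idLocations INTEGER PRIMARY KEY,
    Discription TEXT
);
-- n:m -- the IL junction table links ingredients to locations.
CREATE TABLE IL (
    idIngridients INTEGER REFERENCES Ingridients(idIngridients),
    idLocations INTEGER REFERENCES Locations(idLocations),
    PRIMARY KEY (idIngridients, idLocations)
);
""")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
```

The composite primary key on IL enforces that each (ingredient, location) pair appears at most once, which is exactly what the many-to-many junction table needs.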

Working with the database

The Witcher1 database contains information about which elixirs should be used against which monsters, special tactics for especially dangerous monsters, such as: Pestilence Maiden, Devil, Imp, Goblin, etc. Analyzing information from each table in order will take a lot of time, so we will create special queries to the database that will be as useful as possible for the user.

· A request for information on how to find a specific monster.

This query will contain the keyword JOIN, thanks to which the ML and Monsters tables of the Witcher1 database will be accessed.

This request will look like this:

SELECT * FROM ml JOIN monsters ON monsters.idMonsters = ml.idMonsters;

After executing the query, we will receive a fairly large table as output: the result of joining the two tables. To keep the displayed table manageable, you can specify which monster to display information about. For Hym, for example, the request will look like this:

SELECT monsters.MonstersName, monsters.MonstersDiscription,
       ml.DiscriptionHowFind, ml.idLocations
FROM ml JOIN monsters ON monsters.idMonsters = ml.idMonsters
WHERE monsters.MonstersName = 'Hym';

Which monster this or that ID corresponds to can be found out by querying the Monsters or ML tables. The queries will look like this:

SELECT idMonsters, MonstersName FROM ml;

SELECT idMonsters, MonstersName FROM monsters;

To check compliance, you can query both the ML and Monsters tables, first joining them by idMonsters.

SELECT ml.idMonsters, monsters.MonstersName
FROM ml JOIN monsters ON ml.idMonsters = monsters.idMonsters
ORDER BY monsters.idMonsters;

· A request for what elixir is suitable for this monster.

To implement this request, a JOIN will be used. The query addresses the two tables Elixirs and Monsters and returns information about when to drink which elixir in the fight against a monster:

SELECT monsters.MonstersName, elixirs.ElixirName, elixirs.ElixirDiscription
FROM elixirs JOIN monsters ON elixirs.idElixirs = monsters.idElixirs;

· A query about what ingredient is found in a particular area.

To implement this request, a JOIN will be used. The query addresses the two tables Ingridients and Locations and returns which ingredient is found in which location, along with information about its type:

SELECT ingridients.Discription, locations.Discription, ingridients.WhereFind
FROM ingridients JOIN locations ON ingridients.idIngridients = locations.idIngridients
ORDER BY ingridients.Discription;

· UPDATE requests

We implement this query for a monster in the Monsters table named Hym. Let's say we want to change his name to Him:

UPDATE monsters
SET MonstersName="Him"
WHERE idMonsters=1;

But, since Hym is correct in the English version, let’s return everything back:

UPDATE monsters
SET MonstersName="Hym"
WHERE idMonsters=1;

Fig.2. Implementing UPDATE queries

· "Aggregation" queries. COUNT and COUNT(DISTINCT)

The COUNT function counts rows: COUNT(*) counts all rows, while COUNT(column) counts only the rows where that column is not NULL. COUNT(*) has an optimized form for counting all rows of a single table. For example:

Fig.3. Count rows in the Elixirs, Monsters, and Monsters JOIN elixirs tables.

The COUNT(DISTINCT) function counts the number of distinct (non-repeating) non-NULL values in a column:

Fig.4. Counting non-repeating elixirs in the Monsters table.
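The difference between COUNT(*), COUNT(column), and COUNT(DISTINCT column) can be illustrated with a small sqlite3 example. The data here is illustrative, not taken from the actual Witcher1 tables.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE m (idMonsters INTEGER, idElixirs INTEGER)")
# Four rows, but only two distinct elixir ids, and one row with NULL.
conn.executemany("INSERT INTO m VALUES (?, ?)",
                 [(1, 10), (2, 10), (3, 20), (4, None)])

total = conn.execute("SELECT COUNT(*) FROM m").fetchone()[0]              # all rows
non_null = conn.execute("SELECT COUNT(idElixirs) FROM m").fetchone()[0]   # skips NULL
distinct = conn.execute(
    "SELECT COUNT(DISTINCT idElixirs) FROM m").fetchone()[0]              # distinct non-NULL

print(total, non_null, distinct)  # 4 3 2
```

So COUNT(*) sees every row, COUNT(idElixirs) drops the NULL row, and COUNT(DISTINCT idElixirs) additionally collapses the duplicate value 10.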

· DELETE function.

Let's add another row to the Elixirs table using INSERT:

INSERT INTO elixirs VALUES (6,'ForDelete','DiscriptionDelete');

Fig.5. Adding a row to the Elixirs table.

Now let's issue a query to delete this row, since there is no need for an elixir that will not help in the fight against monsters:

DELETE FROM elixirs WHERE idElixirs=6;

Fig.6. Deleting the added row.

Database Load Testing

Now that queries have been completed and access to the database has been established, it can be tested using several parameters:

· Response Times Over Time – this check records, for each request, its average response time in milliseconds.

· Response Times Distribution – this check shows the number of responses falling within each interval of execution time.

· Response Time Percentiles – this check displays percentiles of the response time values. On the graph, the X axis shows percentages and the Y axis shows the response time.

For the most realistic tests, we will set certain parameters:

Fig.7. Test parameters

Number of Threads (users) – the number of virtual users. In our case we set it to 1000 in order to load the database as much as possible.

Ramp-Up Period – the period during which all users will be involved.

We will check the performance of all the JOIN requests when they are executed simultaneously by many users.

The last three items are the graphing plugins for the checks with which we will test the database.

· Checking Response Times Over Time

Fig.7. The result of executing queries during the test Response Times Over Time

As can be seen from the graph, the most expensive request was “Monsters&Locations”, which required the longest response time. You can confirm the reason by running the request in the console: both the Monsters table and the ML table contain lengthy text explanations of the monsters and where to find them, so the request takes quite a long time to complete.

· Checking Response Times Distribution

Fig.8. The result of executing queries during the test Response Times Distribution.

From this graph we can conclude that the number of responses for each of our requests in the same period of time is the same.

· Checking Response Time Percentiles

The ordinate axis shows execution time, and the abscissa axis shows the percentage of the total number of requests. From the graph we can conclude that 90% of requests complete within 0 to 340 milliseconds; from 5% to 15% the response time grows linearly, and then exponentially with a very small growth coefficient.

The remaining 10% execute in the interval from 340 to 700 milliseconds, which suggests that the database is under a very heavy load.
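The percentile figures above come from JMeter's graph plugin, but the underlying computation is simple. Here is a minimal sketch of the nearest-rank percentile method in Python, using hypothetical response times in milliseconds (this approximates, but is not guaranteed to match, JMeter's exact interpolation):

```python
def percentile(times_ms, p):
    """Return the value below which roughly p percent of samples fall
    (nearest-rank method)."""
    ordered = sorted(times_ms)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical response times in milliseconds.
samples = [120, 150, 180, 200, 220, 260, 300, 330, 340, 690]
print(percentile(samples, 90))  # 340
print(percentile(samples, 50))  # 220
```

With this data, 90% of requests fall at or below 340 ms while the slowest outlier (690 ms) only shows up at the 100th percentile, mirroring the shape described above.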

Conclusion

In this course work, all of the tasks were completed. The database was designed and filled with data, and the main ways of using it were demonstrated in the form of queries and their results.

Finally, load testing was carried out and its results were analyzed, with conclusions drawn.

It should be noted that the database itself was created purely for educational purposes, so it is not very large.

Another important characteristic is security: passwords, if such a table is created, must be stored in encrypted form and protected from unauthorized access.

Literature

1. http://phpclub.ru/mysql/doc/ – “MySQL – reference guide” (online resource)

2. Schwartz B., Zaitsev P., Tkachenko V., et al. – MySQL. Optimizing Performance (2nd edition)

3. Thalmann L., Kindal M., Bell C. – “Ensuring high availability of MySQL-based systems”

4. Andrzej Sapkowski – “The Witcher (large collection)”, 571 pages

5. CD PROJEKT RED, GOG.COM – “The Witcher 3: Wild Hunt”

