Memory management in Oracle JDBC 12c
Posted: Mon Jan 27, 2025 9:33 am
In 12c there is only one buffer, of type byte[], in which everything is stored. And now comes the big difference: how much memory is actually allocated depends not on the column definitions but on the actual amount of data! Let's take the table with 255 VARCHAR2(4000) columns as an example again. This time we also have to decide how it is actually filled, so let's assume that 170 of the 255 columns are non-NULL, with an average length of 30 characters.
Now the memory requirement is a little different:
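A back-of-the-envelope sketch of the difference, assuming the 11g driver sizes its char[] buffers by column definition at two bytes per character, while the 12c byte[] buffer holds only the actual data (a single-byte character set is assumed here):

```java
public class RowBufferEstimate {
    public static void main(String[] args) {
        // 11g: definition-driven. 255 VARCHAR2(4000) columns,
        // buffered at 2 bytes per *defined* character (assumption).
        long v11PerRow = 255L * 4000 * 2;   // 2,040,000 bytes, roughly 2 MB per row

        // 12c: data-driven. Only the 170 non-NULL columns count,
        // at their actual average length of 30 characters.
        long v12PerRow = 170L * 30;         // 5,100 bytes per row

        System.out.println("11g per row: " + v11PerRow + " bytes");
        System.out.println("12c per row: " + v12PerRow + " bytes");
    }
}
```

Driver-internal overhead is ignored, so the absolute numbers are only indicative; the point is the three orders of magnitude between the two sizing strategies.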
So far, so good - in theory it looks great. But how does it work in practice? That's exactly what I wanted to know, so I put it to the test.
The test
For the test I used the popular load-testing tool Swingbench [3]. Apart from the benchmarking framework itself, Swingbench currently ships four benchmarks, but it also lets you design your own tests, with your own tables and your own Java code.
This gave me the opportunity to tailor the test to the question. The test table consisted of 200 VARCHAR2(4000) columns whose contents averaged 160 bytes. For the 11g driver this meant that ~1.5 MB had to be allocated per record, whereas for the 12c driver it was ~30 KB. The query was chosen so that there was enough room to vary the fetch size: the queries returned 1000 records on average.
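These per-record figures make clear why the fetch size is the interesting knob: the driver must buffer one fetch array's worth of rows per open statement. A rough sketch, using the per-record sizes from the test above and an illustrative fetch size of 100:

```java
public class FetchBufferEstimate {
    public static void main(String[] args) {
        long rowBytes11g = 1_500_000; // ~1.5 MB per record (11g, definition-driven)
        long rowBytes12c = 30_000;    // ~30 KB per record (12c, data-driven)
        int fetchSize = 100;          // illustrative value; varied in the actual test

        long buf11g = rowBytes11g * fetchSize; // 150,000,000 bytes, roughly 143 MB
        long buf12c = rowBytes12c * fetchSize; //   3,000,000 bytes, roughly 2.9 MB

        System.out.println("11g fetch buffer: " + buf11g + " bytes");
        System.out.println("12c fetch buffer: " + buf12c + " bytes");
    }
}
```

With the 11g driver, a single statement at fetch size 100 already claims a sizeable slice of a 1 GB heap; with the 12c driver the same fetch size is almost free.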
Independent variables were the driver version and the fetch size. For both drivers, ojdbc6.jar was used to keep the conditions as constant as possible. The dependent variables were:
the number of completed transactions as returned by the Swingbench framework
the maximum and average heap size, as well as the memory allocated for char[] and byte[] objects, measured with Java Flight Recorder (note: a license is required for production use)
the type of load on the database server, measured via SQL trace. The trace files, and the tkprof reports generated from them, served mainly as a sanity check, but they also give an excellent view of what is happening from the DB server's perspective.
The heap size was capped at 1 GB. The tests ran with 4 concurrent users, for 20 minutes per combination of driver version and fetch size. As spot checks showed, this run time was entirely sufficient to obtain stable results.
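Given these constraints, a crude upper bound on the sustainable fetch size can be sketched: a 1 GiB heap shared by 4 users, divided by the per-record buffer sizes from above. This deliberately ignores all other heap usage, GC headroom, and any char[] copies made when the data is read, so treat it as an order-of-magnitude estimate only:

```java
public class MaxFetchSizeEstimate {
    public static void main(String[] args) {
        long heapBytes = 1L << 30;    // 1 GiB heap cap from the test setup
        int users = 4;                // concurrent Swingbench users
        long perUser = heapBytes / users;

        long rowBytes11g = 1_500_000; // ~1.5 MB per record (11g)
        long rowBytes12c = 30_000;    // ~30 KB per record (12c)

        // Crude upper bound: how many buffered rows fit into one user's share.
        System.out.println("11g: fetch size up to ~" + perUser / rowBytes11g); // 178
        System.out.println("12c: fetch size up to ~" + perUser / rowBytes12c); // 8947
    }
}
```

The two drivers differ by roughly two orders of magnitude in how large a fetch size the same heap can sustain, which frames what the measured results should look like.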