The Hibernate data cache strategy

(one) The Hibernate data cache strategy

     The cache is a temporary in-memory container for database data: it holds an in-memory copy of database table data and sits between the database and the data access layer. For systems with very frequent query operations (forums, news sites), a caching mechanism is particularly important.

    When an ORM reads data, it looks in the cache first, avoiding the performance overhead of a database call.

An ORM data cache should include the following levels:
1) Transaction-level cache    2) Application-level cache    3) Distributed cache

For Hibernate specifically, the two-level cache strategy works as follows:
(1) When querying by conditions, Hibernate always sends a select * from table_name where... SQL statement to the database, retrieving all matching data objects at once.
(2) All retrieved data objects are placed into the second-level cache, keyed by their ID.
(3) When Hibernate accesses a data object by ID, it first checks the Session's first-level cache; if not found and a second-level cache is configured, it looks in the second-level cache; if still not found, it queries the database and puts the result into the cache by ID.
(4) When data is deleted, updated, or inserted, the cache is updated as well.
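The lookup order in steps (1)-(4) can be sketched in plain Java. This is an illustrative simulation only, not Hibernate code: both caches are modeled as maps keyed by the object's ID, and the "database" is a stand-in function.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative simulation (not the Hibernate API): Session cache first,
// then the second-level cache, then the database.
public class TwoLevelCacheDemo {
    static Map<Long, String> sessionCache = new HashMap<>();     // first level (Session)
    static Map<Long, String> secondLevelCache = new HashMap<>(); // second level (SessionFactory)
    static int dbHits = 0;                                       // counts real database queries

    static String load(Long id, Function<Long, String> database) {
        String obj = sessionCache.get(id);
        if (obj != null) {
            return obj;                     // found in the Session's internal cache
        }
        obj = secondLevelCache.get(id);
        if (obj == null) {
            obj = database.apply(id);       // not cached anywhere: query the database
            dbHits++;
            secondLevelCache.put(id, obj);  // store the result by ID
        }
        sessionCache.put(id, obj);
        return obj;
    }

    public static void main(String[] args) {
        Function<Long, String> database = id -> "user#" + id;
        load(1L, database);
        load(1L, database);                 // second call is served from the cache
        System.out.println(dbHits);         // prints 1
    }
}
```

Running this shows only one database hit for two loads of the same ID, which is exactly the saving the cache strategy is after.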

1. First-level cache (Session level) - transaction-level cache

     1) Loading data by primary key ID: the Session.load() and Session.iterate() methods

      2) Lazy loading

       The Session maintains a set of the data objects it has selected and operated on. This is known as the Session's internal cache; it is the first-level and fastest Hibernate cache. Its behavior is built into Hibernate and requires no configuration (indeed, there is no way to configure it :-).

       The internal cache is normally maintained automatically by Hibernate, but manual intervention is possible:
                  1) Session.evict(): removes a specific object from the internal cache
                  2) Session.clear(): empties the internal cache

2. Second-level cache (SessionFactory level) - application-level cache

       The second-level cache is shared by all Session instances created by the same SessionFactory.

3. Third-party cache implementations

      EHCache, OSCache

The memory overflow problem caused by Hibernate batch operations

      Batch operations are not well suited to ORM persistence technologies such as CMP or Hibernate; iBatis may be a better fit.

      This is because every time a persistence method such as save() is called, the object involved is placed into the Session's internal cache. The internal cache differs from the second-level cache: a maximum capacity can be specified in the second-level cache configuration, but the internal cache grows without bound.

The solutions:

1) In the batch case, turn off the Hibernate cache. But if you turn off the Hibernate cache, you might as well use JDBC directly, since there is then little difference between the two.

2) Clear the Session's internal cache at regular intervals.

     The Session implements asynchronous write-behind, which allows Hibernate to batch write operations explicitly. Here is one way to implement batch inserts with Hibernate: first set a reasonable JDBC batch size, hibernate.jdbc.batch_size=20, and then flush() and clear() the Session at that same interval:

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();
for (int i = 0; i < 100000; i++) {
    Customer customer = new Customer(.....);
    session.save(customer);
    if (i % 20 == 0) {
        // Flush the inserted data and release memory:
        session.flush();
        session.clear();
    }
}
tx.commit();
session.close();

     To optimize performance, operations can be performed in bulk. In traditional JDBC programming, batch mode works as follows, submitting a number of SQL operations in one batch:

PreparedStatement ps = conn.prepareStatement("insert into users(name) values(?)");
for (int i = 0; i < 100000; i++) {
    ps.setString(1, "user" + i);
    ps.addBatch();
}
int[] counts = ps.executeBatch();

In Hibernate, you can set the hibernate.jdbc.batch_size parameter to specify how many SQL statements are submitted in each batch.
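In hibernate.cfg.xml this could look like the following sketch; the value 20 matches the flush interval used in the batch-insert code above:

```xml
<property name="hibernate.jdbc.batch_size">20</property>
```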

Analysis of the bulk delete mechanisms in Hibernate 2 and Hibernate 3


        Transaction tx = session.beginTransaction();
        session.delete("from users");
        tx.commit();

     Observe the log output:

Hibernate: select ... from users
Hibernate: delete from users where id=?
Hibernate: delete from users where id=?
Hibernate: delete from users where id=?

       In Hibernate 2, a bulk delete first queries all records matching the condition from the database, then deletes them one by one in a loop. If there are too many records, this inevitably causes memory overflow and poor deletion efficiency. Why would the ORM do that? Because the ORM must automatically maintain in-memory object state, it has to know which data the user is operating on. Solutions to the problem:

1) Memory consumption

       Before a bulk delete, all records matching the condition are first queried from the database; if the data set is too large, this causes an OutOfMemoryError.

        You can use the Session.iterate() or Query.iterate() method to obtain records one by one and then perform the delete operation. In addition, Hibernate 2.1.6 provides cursor-based traversal of data:

Transaction tx = session.beginTransaction();
String hql = "from TUser";
Query query = session.createQuery(hql);
ScrollableResults sr = query.scroll();
while (sr.next()) {
    TUser user = (TUser) sr.get(0);
    session.delete(user);
}
tx.commit();


2) Efficiency of looped deletion

     Because Hibernate's bulk delete repeatedly issues delete SQL statements, there is a performance problem. This, too, can be mitigated by adjusting the hibernate.jdbc.batch_size parameter.


    Hibernate 3 introduces bulk delete/update operations into HQL, completing the bulk data operation with a single independent SQL statement:

Transaction tx = session.beginTransaction();
String hql = "delete TUser";
Query query = session.createQuery(hql);
int count = query.executeUpdate();
tx.commit();


Observe the log output:

Hibernate:delete from TUser

(two) The iBatis data cache

    Compared with Hibernate's rigorous ORM encapsulation (because data object operations are tightly encapsulated, cache synchronization within its scope can be guaranteed), iBatis is a semi-open implementation, so its cache operations are difficult to synchronize fully automatically.

    The caching mechanism of iBatis must therefore be used with special care. In particular, for the flushOnExecute setting (see the section on iBatis configuration), you need to consider every operation that could cause the actual data and the cached data to diverge: data updates by other Statements in the same module, updates by other modules, even updates by third-party systems. Otherwise, dirty data will pose a great hidden danger to the normal operation of the system. If you are not entirely sure of the scope of data update operations, avoid using the Cache blindly.
1. iBatis cache settings

In sqlmap-config.xml, add inside <sqlMapConfig>:

    <settings cacheModelsEnabled="true"
              lazyLoadingEnabled="true" />

In the sqlMap file (maps.xml), add inside <sqlMap>:

<cacheModel id="userCache" type="LRU" readonly="true" serialize="false">

       <flushInterval hours="24"/>

       <flushOnExecute statement="insertTest"/>

       <property name="size" value="1000"/>

</cacheModel>
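To actually use the cache, a mapped statement references the cacheModel by its id. A sketch, in which the statement id, parameter class, and result class are hypothetical:

```xml
<select id="getUser" parameterClass="int" resultClass="User" cacheModel="userCache">
    select * from users where id = #value#
</select>
```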


As you can see, a cacheModel has the following important attributes: readOnly, serialize, and type.

     The readOnly value specifies whether the data objects in the cache are read-only. "Read-only" does not mean that a data object can never be modified once it enters the cache. Rather, when a data object changes, for example when one of its properties is modified, the data object is evicted from the cache, and the next access reads from the database again and constructs a new data object.

If a globally shared data cache is needed, the serialize attribute of the cacheModel must be set to true. Otherwise, the cached data is only valid for the current Session (which can loosely be understood as the current thread), and such a local cache contributes little to overall system performance.

Cache types:
    Similar to Hibernate, iBatis implements its cache through a pluggable cache interface, and provides several Cache implementation options:

MEMORY type Cache and WeakReference
        The MEMORY type Cache in fact works through Java object references. In iBatis it is implemented by the class com.ibatis.db.sqlmap.cache.memory.MemoryCacheController; MemoryCacheController uses a HashMap to store the data objects that currently need to be cached.

LRU type Cache
        When the Cache reaches its preset maximum capacity, iBatis removes the least recently used objects from the cache, in accordance with the "least recently used" principle. Configurable parameters include:
flushInterval: specifies how often to clear the cache; for example, <flushInterval hours="24"/> empties the entire cache every 24 hours.

FIFO type Cache
A first-in, first-out cache: the data that entered the Cache first is the first to be evicted.
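The two eviction policies just described can be demonstrated with plain Java (this is not iBatis code, just a sketch of the principles): a LinkedHashMap with accessOrder=true behaves like an LRU cache, and with accessOrder=false like a FIFO cache.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of LRU vs FIFO eviction using LinkedHashMap (not iBatis code).
public class EvictionDemo {
    static <K, V> Map<K, V> boundedCache(int maxSize, boolean lru) {
        return new LinkedHashMap<K, V>(16, 0.75f, lru) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;  // evict when capacity is exceeded
            }
        };
    }

    public static void main(String[] args) {
        Map<String, String> lru = boundedCache(2, true);
        lru.put("a", "1");
        lru.put("b", "2");
        lru.get("a");                      // touch "a": "b" is now least recently used
        lru.put("c", "3");                 // evicts "b" under LRU
        System.out.println(lru.keySet());  // prints [a, c]

        Map<String, String> fifo = boundedCache(2, false);
        fifo.put("a", "1");
        fifo.put("b", "2");
        fifo.get("a");                     // access does not matter for FIFO
        fifo.put("c", "3");                // evicts "a" (first in, first out)
        System.out.println(fifo.keySet()); // prints [b, c]
    }
}
```

Note how the same insertion sequence evicts different entries: LRU keeps the recently touched "a", while FIFO discards it because it was inserted first.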


(three) The open-source data caching strategy: OSCache

Problems OSCache can solve:

1) Content based on data that does not change during short processing windows, but that may dynamically grow or shrink over a longer period.

2) Statistical reports are periodic work: they may only need to be updated every half month, every month, or even less often. However, statistical reports are usually graphics, or generated PDF, Word, or Excel files; producing these graphics and files typically consumes a lot of system resources and places a heavy burden on the running system.

        OSCache is a J2EE caching framework for the web application layer, provided by the OpenSymphony organization. OSCache supports caching part of a page's content or the entire response at the page level, and programmers can cache at different granularities according to different needs and environments. You can use memory, disk space, both memory and disk together, or other resources of your own (by providing an adapter) as the cache store.

Usage steps:

1. Download and unzip OSCache

Go to the OSCache home page and download the latest version; the latest stable version is OSCache 2.

Extract the downloaded .zip file to c:\oscache (later sections use %OSCache_Home% to represent this directory).

2. Create a new web application

3. Put the main component, %OSCache_Home%\oscache.jar, into the WEB-INF\lib directory

4. Handle commons-logging.jar and commons-collections.jar

OSCache uses the Jakarta Commons Logging component to handle log information, so it needs commons-logging.jar: put %OSCache_Home%\lib\core\commons-logging.jar on the classpath (which usually means putting the file into the WEB-INF\lib directory).
If you are using JDK 1.3, also put %OSCache_Home%\lib\core\commons-collections.jar on the classpath; with JDK 1.4 or above, this is not needed.

5. Put oscache.properties and oscache.tld into the WEB-INF\classes directory

%OSCache_Home%\oscache.properties contains the settings that control how OSCache operates
%OSCache_Home%\oscache.tld contains the definitions of the tag library provided by OSCache

6. Modify the web.xml file

Add the following content to the web.xml file to enable the taglib support that OSCache provides:
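A taglib declaration along these lines would be added; the uri value here is an assumption and must match the uri used in the JSP taglib directive:

```xml
<taglib>
    <taglib-uri>oscache</taglib-uri>
    <taglib-location>/WEB-INF/classes/oscache.tld</taglib-location>
</taglib>
```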


7. The simplest cache tag

The cached content is identified by a default key, and the default timeout is 3600 seconds:

//The contents of my JSP code
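A minimal sketch of what the tag usage could look like; the `cache` prefix and `oscache` uri are assumptions and must match the taglib declaration in web.xml:

```jsp
<%@ taglib uri="oscache" prefix="cache" %>
<cache:cache>
    <%-- the contents of my JSP code --%>
</cache:cache>
```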

8. Caching a single file

         The OSCache component provides a CacheFilter for page-level caching, mainly used to cache dynamic pages in web applications, especially pages that generate PDF files/reports or picture files. This not only reduces database interaction and server load, but also noticeably reduces the performance cost on the web server.

Modify web.xml and add the following content to cache the /testContent.jsp page:

<!-- cache the content of the /testContent.jsp page -->
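A sketch of the filter configuration; the filter class is OSCache's CacheFilter, and the optional `time` init-param (cache duration in seconds) shown here is an assumption matching the 3600-second default mentioned above:

```xml
<filter>
    <filter-name>CacheFilter</filter-name>
    <filter-class>com.opensymphony.oscache.web.filter.CacheFilter</filter-class>
    <init-param>
        <param-name>time</param-name>
        <param-value>3600</param-value>
    </init-param>
</filter>
<filter-mapping>
    <filter-name>CacheFilter</filter-name>
    <url-pattern>/testContent.jsp</url-pattern>
</filter-mapping>
```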

Posted by Adelaide at December 18, 2013 - 2:36 PM