Problem description
If you use Hibernate, you are probably tired of seeing traces full of prepared statements and their question-mark placeholders ("?"), or of seeing the SQL queries without being able to see their results unless you add an extra line of code. Wouldn't it be useful if the results were printed in the log as well? The solution turns out to be very simple.
Solution
log4jdbc-log4j2 is a JDBC driver that acts as a proxy. It lets you trace every JDBC call and SQL statement using Log4j and SLF4J.
The first step is to add the log4jdbc library dependencies to the project. The library version depends on the JDK version you are using:
- log4jdbc-log4j2-jdbc3.jar for JDBC 3, supported on JDK 1.5.
- log4jdbc-log4j2-jdbc4.jar for JDBC 4, supported on JDK 1.6.
- log4jdbc-log4j2-jdbc4.1.jar for JDBC 4.1, supported on JDK 1.7.
With Maven, the dependency is declared as follows (where jdbcXX is the variant that matches your JDK):
<dependency>
    <groupId>org.bgee.log4jdbc-log4j2</groupId>
    <artifactId>log4jdbc-log4j2-jdbcXX</artifactId>
    <version>1.16</version>
</dependency>
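Adding the jar only puts the proxy on the classpath; the application still has to route its connections through it. With plain JDBC this is done by loading log4jdbc-log4j2's DriverSpy driver class and prefixing the real JDBC URL with jdbc:log4jdbc:. The following is only a minimal sketch, assuming an in-memory HSQLDB database is also on the classpath; verify the driver class name and URL prefix against the documentation of the version you use.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class Log4jdbcExample {

    public static void main(String[] args) throws Exception {
        // Proxy driver shipped by log4jdbc-log4j2. With the JDBC 4 jars the
        // driver may auto-register, making this Class.forName unnecessary.
        Class.forName("net.sf.log4jdbc.sql.jdbcapi.DriverSpy");

        // The real JDBC URL (here an in-memory HSQLDB database, purely for
        // illustration) prefixed with "jdbc:log4jdbc:".
        String url = "jdbc:log4jdbc:hsqldb:mem:testdb";

        try (Connection con = DriverManager.getConnection(url, "sa", "");
             PreparedStatement ps = con.prepareStatement(
                     "select TABLE_NAME from INFORMATION_SCHEMA.SYSTEM_TABLES where TABLE_NAME = ?")) {
            ps.setString(1, "ITEM");
            try (ResultSet rs = ps.executeQuery()) {
                // With the log4jdbc loggers active (configured later in this
                // article), the statement is logged with the bound value in
                // place of "?" and the result set can be printed as a table.
                while (rs.next()) {
                    System.out.println(rs.getString("TABLE_NAME"));
                }
            }
        }
    }
}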
Next, add a log4jdbc.log4j2.properties file to the classpath. This file tells the driver which logging system to use (Log4j 2, SLF4J):
log4jdbc.spylogdelegator.name=net.sf.log4jdbc.log.slf4j.Slf4jSpyLogDelegator
#Optional parameters
#log4jdbc.debug.stack.prefix=^
#log4jdbc.sqltiming.warn.threshold=
#log4jdbc.dump.sql.select=false
#log4jdbc.dump.sql.insert=false
#log4jdbc.dump.sql.update=false
#log4jdbc.dump.sql.delete=false
The same setting can also be passed as a system property when running the application:
-Dlog4jdbc.spylogdelegator.name=net.sf.log4jdbc.log.slf4j.Slf4jSpyLogDelegator
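If changing the launch command is not an option, the same property can be set programmatically, as long as it happens before the first connection is obtained through the proxy driver. A minimal sketch:

// Equivalent to the -D flag above. Run this before the first log4jdbc
// connection is created, e.g. at the very start of main() or in a static
// initializer of your bootstrap class.
System.setProperty("log4jdbc.spylogdelegator.name",
        "net.sf.log4jdbc.log.slf4j.Slf4jSpyLogDelegator");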
The next step is configuring the loggers. Depending on the data you want to monitor, enable the corresponding loggers. As an example, let's configure log4j.xml so that it prints each result set as a table, together with the time taken to execute each query.
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

    <appender name="console-log4jdbc" class="org.apache.log4j.ConsoleAppender">
        <param name="Target" value="System.out"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="%m%n"/>
        </layout>
    </appender>

    <logger name="jdbc.sqltiming" additivity="false">
        <level value="info"/>
        <appender-ref ref="console-log4jdbc"/>
    </logger>

    <!-- log4jdbc option: log the JDBC results as a table -->
    <logger name="jdbc.resultsettable" additivity="false">
        <level value="info"/>
        <appender-ref ref="console-log4jdbc"/>
    </logger>

</log4j:configuration>
Once the loggers are configured, run your code and you will see traces like the ones below.
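The traces were produced by a small Hibernate example that persists Order and Item entities (hence the Item and T_ORDER tables). As a rough, non-authoritative sketch of how such a setup might point Hibernate at the proxy driver, assuming an in-memory HSQLDB database and schema auto-creation (the entity classes themselves are not shown):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateBootstrap {

    public static SessionFactory buildSessionFactory() {
        Configuration cfg = new Configuration()
                // Route every JDBC call through the log4jdbc-log4j2 proxy driver.
                .setProperty("hibernate.connection.driver_class",
                        "net.sf.log4jdbc.sql.jdbcapi.DriverSpy")
                // Real URL prefixed with jdbc:log4jdbc:; an in-memory HSQLDB
                // database is assumed here.
                .setProperty("hibernate.connection.url",
                        "jdbc:log4jdbc:hsqldb:mem:orders")
                .setProperty("hibernate.connection.username", "sa")
                .setProperty("hibernate.connection.password", "")
                .setProperty("hibernate.dialect", "org.hibernate.dialect.HSQLDialect")
                // Lets Hibernate emit the create table / alter table statements
                // seen at the top of the traces.
                .setProperty("hibernate.hbm2ddl.auto", "create");
        // The annotated Order and Item entities of the example would be
        // registered here, e.g. cfg.addAnnotatedClass(Order.class).
        return cfg.buildSessionFactory();
    }
}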
create table Item (id bigint generated by default as identity, price double not null, product varchar(255), quantity integer not null, order_id bigint, primary key (id))
{executed in 10 msec}
create table T_ORDER (id bigint generated by default as identity, customer varchar(255), primary key (id))
{executed in 1 msec}
alter table Item add constraint FK22EF339F325255 foreign key (order_id) references T_ORDER
{executed in 11 msec}
insert into T_ORDER (id, customer) values (null, NULL)
{executed in 1 msec}
insert into Item (id, order_id, price, product, quantity) values (null, NULL, 0.0, NULL, 0)
{executed in 0 msec}
batching 1 statements: 0: update Item set ORDER_ID=1 where id=1
{executed in 2 msec}
insert into T_ORDER (id, customer) values (null, NULL)
{executed in 0 msec}
insert into Item (id, order_id, price, product, quantity) values (null, NULL, 0.0, NULL, 0)
{executed in 0 msec}
batching 1 statements: 0: update Item set ORDER_ID=2 where id=2
{executed in 0 msec}
select order0_.id as id1_0_, order0_.customer as customer1_0_ from T_ORDER order0_ where order0_.id=2
{executed in 0 msec}
|---------|---------|
|ID       |CUSTOMER |
|---------|---------|
|[unread] |null     |
|---------|---------|
select items0_.ORDER_ID as ORDER5_1_2_, items0_.id as id2_, items0_.id as id0_1_, items0_.order_id as order5_0_1_, items0_.price as price0_1_, items0_.product as product0_1_, items0_.quantity as quantity0_1_, order1_.id as id1_0_, order1_.customer as customer1_0_ from Item items0_ left outer join T_ORDER order1_ on items0_.order_id=order1_.id where items0_.ORDER_ID=2
{executed in 0 msec}
|---------|---|---|---------|------|--------|---------|---|---------|
|ORDER_ID |ID |ID |ORDER_ID |PRICE |PRODUCT |QUANTITY |ID |CUSTOMER |
|---------|---|---|---------|------|--------|---------|---|---------|
|2        |2  |2  |2        |0.0   |null    |0        |2  |[unread] |
|---------|---|---|---------|------|--------|---------|---|---------|
insert into T_ORDER (id, customer) values (null, NULL)
{executed in 0 msec}
insert into Item (id, order_id, price, product, quantity) values (null, NULL, 0.0, 'foo', 0)
{executed in 0 msec}
batching 1 statements: 0: update Item set ORDER_ID=3 where id=3
{executed in 0 msec}
select order0_.id as id1_, order0_.customer as customer1_ from T_ORDER order0_ inner join Item items1_ on order0_.id=items1_.ORDER_ID where items1_.product='foo' limit 2
{executed in 6 msec}
|---|---------|
|ID |CUSTOMER |
|---|---------|
|3  |null     |
|---|---------|
select items0_.ORDER_ID as ORDER5_1_2_, items0_.id as id2_, items0_.id as id0_1_, items0_.order_id as order5_0_1_, items0_.price as price0_1_, items0_.product as product0_1_, items0_.quantity as quantity0_1_, order1_.id as id1_0_, order1_.customer as customer1_0_ from Item items0_ left outer join T_ORDER order1_ on items0_.order_id=order1_.id where items0_.ORDER_ID=3
{executed in 0 msec}
|---------|---|---|---------|------|--------|---------|---|---------|
|ORDER_ID |ID |ID |ORDER_ID |PRICE |PRODUCT |QUANTITY |ID |CUSTOMER |
|---------|---|---|---------|------|--------|---------|---|---------|
|3        |3  |3  |3        |0.0   |foo     |0        |3  |[unread] |
|---------|---|---|---------|------|--------|---------|---|---------|
If you use logback.xml instead, you can add the loggers like this:
<appender name="ACCESS" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <!-- Roll over daily -->
        <fileNamePattern>logs/access-%d{yyyy-MM-dd}.log</fileNamePattern>
        <!-- Max log history: 10 days -->
        <maxHistory>10</maxHistory>
    </rollingPolicy>
    <encoder>
        <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger - %msg%n</pattern>
    </encoder>
    <!-- The log level -->
    <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
        <level>INFO</level>
        <onMatch>ACCEPT</onMatch>
        <onMismatch>DENY</onMismatch>
    </filter>
    <!-- Maximum log file size -->
    <triggeringPolicy class="ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy">
        <maxFileSize>10MB</maxFileSize>
    </triggeringPolicy>
</appender>

<logger name="net.sf.log4jdbc" level="INFO"/>
<logger name="jdbc.sqltiming" level="OFF"/>
<logger name="jdbc.resultsettable" level="OFF"/>
<logger name="jdbc.sqlonly" level="DEBUG"/>
<logger name="jdbc.audit" level="OFF"/>
<logger name="jdbc.resultset" level="OFF"/>
<logger name="jdbc.connection" level="OFF"/>

<root level="INFO">
    <appender-ref ref="ACCESS"/>
</root>
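Finally, if the application obtains its connections from a javax.sql.DataSource (a connection pool, an application server, Spring) rather than from a DriverManager URL, log4jdbc-log4j2 also provides a DataSource wrapper. A minimal sketch, assuming the wrapper class is net.sf.log4jdbc.sql.jdbcapi.DataSourceSpy as in recent versions of the library (check the class name for your version):

import javax.sql.DataSource;

import net.sf.log4jdbc.sql.jdbcapi.DataSourceSpy;

public class SpyDataSourceFactory {

    // Wraps an already configured DataSource so that every connection it
    // hands out is spied on and logged through the delegator configured above.
    public static DataSource wrap(DataSource realDataSource) {
        return new DataSourceSpy(realDataSource);
    }
}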