Saturday, 2 September 2017

Spring 4 Interview Questions with Answers

1. What are the main features introduced in Spring 4?
Ans: Spring 4 has introduced many new features. Some of them are as follows.

1. The @RestController annotation has been introduced to make developing Spring REST web services easier.
2. AsyncRestTemplate has been added for calling REST web services asynchronously.
3. Java 8 and Hibernate 4.3 are now supported.
4. Time zone handling in Spring MVC is now supported.
5. Spring now supports the WebSocket protocol.
6. Spring messaging supports the STOMP protocol.
7. A Spring Security JUnit test module has been added, with the @WithMockUser and @WithUserDetails annotations.

2. What is the use of the @RestController annotation in Spring 4?

Ans: Spring 4 has introduced the @RestController annotation, which combines @Controller and @ResponseBody. In Spring 4 REST web service development, our handler methods no longer need @ResponseBody.
@RestController = @Controller + @ResponseBody

@RestController
@RequestMapping("/data")
public class PersonController {}  
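
For example, a minimal hedged sketch (the Person type and the data are made up for illustration):

import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/data")
public class PersonController {

    // No @ResponseBody needed: @RestController writes the return value
    // directly to the response body (as JSON, if Jackson is on the classpath).
    @RequestMapping(value = "/person/{id}", method = RequestMethod.GET)
    public Person getPerson(@PathVariable("id") long id) {
        return new Person(id, "Ram"); // Person is a hypothetical POJO
    }
}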

3. What is the role of AsyncRestTemplate and ListenableFuture in Spring 4?
Ans: AsyncRestTemplate lets us call a URL and receive the response asynchronously. The return type is a ListenableFuture, which in turn yields the ResponseEntity.

ListenableFuture<ResponseEntity<String>> future =
        asyncTemplate.exchange(url, method, requestEntity, responseType);
ResponseEntity<String> entity = future.get();  // blocks until the response is available
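
A fuller sketch, assuming a plain GET against a placeholder URL; registering a callback avoids blocking on future.get():

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;
import org.springframework.web.client.AsyncRestTemplate;

public class AsyncClientExample {

    public void callAsync() {
        AsyncRestTemplate asyncTemplate = new AsyncRestTemplate();
        String url = "http://localhost:8080/data/person/1"; // placeholder endpoint
        HttpEntity<String> requestEntity = new HttpEntity<>(new HttpHeaders());

        ListenableFuture<ResponseEntity<String>> future =
                asyncTemplate.exchange(url, HttpMethod.GET, requestEntity, String.class);

        // Non-blocking alternative to future.get(): react when the response arrives
        future.addCallback(new ListenableFutureCallback<ResponseEntity<String>>() {
            @Override
            public void onSuccess(ResponseEntity<String> response) {
                System.out.println("Status: " + response.getStatusCode());
            }

            @Override
            public void onFailure(Throwable t) {
                System.err.println("Call failed: " + t.getMessage());
            }
        });
    }
}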

4. What is the role of AsyncClientHttpRequestFactory and AsyncClientHttpRequest in Spring 4?

Ans: AsyncClientHttpRequestFactory returns an instance of AsyncClientHttpRequest, which represents a client-side asynchronous HTTP request. We use it as follows.



ListenableFuture<ClientHttpResponse> future = asyncClientHttpRequest.executeAsync();
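
A hedged sketch of building such a request directly from a factory; SimpleClientHttpRequestFactory needs a task executor before it can serve asynchronous requests, and the URL is a placeholder:

import java.net.URI;

import org.springframework.core.task.SimpleAsyncTaskExecutor;
import org.springframework.http.HttpMethod;
import org.springframework.http.client.AsyncClientHttpRequest;
import org.springframework.http.client.ClientHttpResponse;
import org.springframework.http.client.SimpleClientHttpRequestFactory;
import org.springframework.util.concurrent.ListenableFuture;

public class AsyncRequestExample {

    public void execute() throws Exception {
        // SimpleClientHttpRequestFactory acts as an AsyncClientHttpRequestFactory
        // once an AsyncListenableTaskExecutor has been set
        SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
        factory.setTaskExecutor(new SimpleAsyncTaskExecutor());

        AsyncClientHttpRequest asyncClientHttpRequest =
                factory.createAsyncRequest(new URI("http://localhost:8080/data"), HttpMethod.GET);

        ListenableFuture<ClientHttpResponse> future = asyncClientHttpRequest.executeAsync();
        ClientHttpResponse response = future.get(); // blocks until the response arrives
        System.out.println(response.getStatusCode());
    }
}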
 

5. How to use WebSocket in Spring 4?

Ans: 1. A Java configuration class extends AbstractWebSocketMessageBrokerConfigurer and overrides its configureMessageBroker() and registerStompEndpoints() methods.
2. The configuration class is annotated with @EnableWebSocketMessageBroker along with @Configuration.
3. A Spring controller uses @SendTo together with @MessageMapping at the method level to declare the destination of the result.
4. To work with WebSocket, a messaging sub-protocol and a JS library are also needed, typically STOMP and SockJS (a minimal sketch follows).
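
A minimal hedged sketch, assuming SockJS/STOMP on the client; the endpoint and destination names are made up:

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.handler.annotation.MessageMapping;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.stereotype.Controller;
import org.springframework.web.socket.config.annotation.AbstractWebSocketMessageBrokerConfigurer;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry config) {
        config.enableSimpleBroker("/topic");              // broker destinations pushed to clients
        config.setApplicationDestinationPrefixes("/app"); // prefix for @MessageMapping destinations
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/chat").withSockJS();       // SockJS fallback endpoint
    }
}

@Controller
class GreetingController {

    // A client sends to /app/hello; the return value is broadcast to /topic/greetings
    @MessageMapping("/hello")
    @SendTo("/topic/greetings")
    public String greet(String name) {
        return "Hello, " + name;
    }
}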

6. What is the role of @CacheConfig in Spring 4?

Ans: @CacheConfig is used at the class level to set common cache-related settings, such as the cache name. Methods annotated with @Cacheable inherit these settings and can override them at the method level.


@Service
@CacheConfig(cacheNames="mycacheone")
public class Student {}  
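
A sketch of how methods pick up the class-level cache name (StudentService, Student and the DAO are hypothetical; a CacheManager bean is assumed to be configured):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cache.annotation.CacheConfig;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
@CacheConfig(cacheNames = "mycacheone")
public class StudentService {

    @Autowired
    private StudentDao studentDao; // hypothetical DAO

    // Uses cache "mycacheone" from @CacheConfig; no cacheNames needed here
    @Cacheable
    public Student findById(Long id) {
        return studentDao.findById(id);
    }

    // A cacheNames attribute here would override the class-level setting
    @CacheEvict(allEntries = true)
    public void clearCache() {
    }
}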

7. How to handle @Async exceptions in Spring 4?

Ans: Spring 4 provides AsyncUncaughtExceptionHandler, which catches exceptions thrown by methods annotated with @Async. We create a class implementing AsyncUncaughtExceptionHandler.


public class MyAsyncUncaughtExceptionHandler implements AsyncUncaughtExceptionHandler {
    @Override
    public void handleUncaughtException(Throwable ex, Method method, Object... params) {
        // log or otherwise handle the exception thrown by the @Async method
        System.err.println("Async method " + method.getName() + " failed: " + ex.getMessage());
    }
}
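
The handler only takes effect once it is registered; one way (with Java config, Spring 4.1+) is to implement AsyncConfigurer, roughly as follows:

import java.util.concurrent.Executor;

import org.springframework.aop.interceptor.AsyncUncaughtExceptionHandler;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.SimpleAsyncTaskExecutor;
import org.springframework.scheduling.annotation.AsyncConfigurer;
import org.springframework.scheduling.annotation.EnableAsync;

@Configuration
@EnableAsync
public class AsyncConfig implements AsyncConfigurer {

    @Override
    public Executor getAsyncExecutor() {
        return new SimpleAsyncTaskExecutor(); // executor used for @Async methods
    }

    @Override
    public AsyncUncaughtExceptionHandler getAsyncUncaughtExceptionHandler() {
        return new MyAsyncUncaughtExceptionHandler(); // the handler defined above
    }
}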

8. What is the role of the @WithMockUser and @WithUserDetails annotations in Spring 4 Security JUnit tests?

Ans: The @WithMockUser annotation lets a test run as a mock authenticated user on the server side in Spring Security JUnit testing. @WithMockUser has username and roles attributes, among others. We use it as follows.

@Test
@WithMockUser(username = "ram", roles = {"ADMIN"})
public void testThree() { userService.methodThree(); }

The @WithUserDetails annotation runs the test with a user loaded from a custom UserDetailsService in Spring Security JUnit testing, and we can use it as follows.

@Test @WithUserDetails("ram") public void testFour() { userService.methodFour(); }

Interview Questions: Transaction Management

Qns-1: Describe Global and Local transactions in Spring.
Ans: Global transactions let us work with multiple transactional resources, such as a relational database and a message queue. Global transactions are managed through JTA and JNDI.
Local transactions are resource-specific, such as a single JDBC connection. Local transactions cannot span multiple transactional resources.
Qns-2: What is the role of the TransactionDefinition interface?
Ans: It defines the transaction properties:
a. Isolation level
b. Propagation behavior
c. Timeout
d. Read-only status
Qns-3: How can we roll back a declarative transaction?
Ans: We can use the rollback-for and no-rollback-for attributes in the XML transaction definition, or the equivalent rollbackFor and noRollbackFor attributes of @Transactional.
Qns-4: How many types of isolation are there?
Ans: a. ISOLATION_DEFAULT: uses the default isolation level of the underlying datastore.
b. ISOLATION_READ_COMMITTED: dirty reads are prevented; non-repeatable and phantom reads are allowed.
c. ISOLATION_READ_UNCOMMITTED: dirty reads, non-repeatable reads and phantom reads are all allowed.
d. ISOLATION_REPEATABLE_READ: dirty reads and non-repeatable reads are prevented, but phantom reads are allowed.
e. ISOLATION_SERIALIZABLE: dirty reads, non-repeatable reads and phantom reads are all prevented.
Qns-5: How many types of Propagation are there?
Ans: The propagation types are listed below (an @Transactional example is sketched after the list).
a. PROPAGATION_MANDATORY: supports the current transaction and throws an exception if no transaction is active.
b. PROPAGATION_NESTED: runs within a nested transaction if a current transaction exists.
c. PROPAGATION_NEVER: does not run within a transaction and throws an exception if a current transaction exists.
d. PROPAGATION_NOT_SUPPORTED: runs non-transactionally and suspends the current transaction if one exists.
e. PROPAGATION_REQUIRED: runs within the current transaction and creates one if none exists.
f. PROPAGATION_REQUIRES_NEW: always creates a new transaction and suspends the current one if it exists.
g. PROPAGATION_SUPPORTS: runs within the current transaction if one exists; otherwise runs non-transactionally.
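
A hedged sketch of how these settings appear on @Transactional (AccountService and its method are stand-ins):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class AccountService {

    // Propagation, isolation, timeout and read-only status declared in one place
    @Transactional(propagation = Propagation.REQUIRED,
                   isolation = Isolation.READ_COMMITTED,
                   timeout = 30,
                   readOnly = false)
    public void transfer(long fromAccountId, long toAccountId, double amount) {
        // debit and credit via hypothetical DAO calls would go here
    }
}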

3-Tier Architecture

In my opinion, you have to distinguish between the MVC pattern and the 3-tier architecture. To sum up:
3-tier architecture:
  • data: persisted data;
  • service: logical part of the application;
  • presentation: UI (HMI), web service...
The MVC pattern takes place in the presentation tier of the above architecture (for a webapp):
  • data: ...;
  • service: ...;
  • presentation:
    • controller: intercepts the HTTP request and returns the HTTP response;
    • model: stores data to be displayed/treated;
    • view: organises output/display.
Life cycle of a typical HTTP request:
  1. The user sends the HTTP request;
  2. The controller intercepts it;
  3. The controller calls the appropriate service;
  4. The service calls the appropriate DAO, which returns some persisted data (for example);
  5. The service treats the data, and returns data to the controller;
  6. The controller stores the data in the appropriate model and calls the appropriate view;
  7. The view gets instantiated with the model's data and is returned as the HTTP response (a sketch of this flow follows).
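
A hedged Spring MVC sketch of that flow (Person, the view name and the SQL are illustrative):

import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Controller;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
class PersonController {                          // presentation tier: handles the HTTP request
    @Autowired private PersonService service;

    @RequestMapping("/persons")
    public String list(Model model) {
        model.addAttribute("persons", service.findAll()); // 6. store data in the model
        return "personList";                               // 7. logical view name to render
    }
}

@Service
class PersonService {                             // service tier: business logic
    @Autowired private PersonDao dao;

    public List<Person> findAll() {
        return dao.findAll();                     // 4-5. delegate persistence to the DAO
    }
}

@Repository
class PersonDao {                                 // data tier: talks to the database
    @Autowired private JdbcTemplate jdbcTemplate;

    public List<Person> findAll() {
        return jdbcTemplate.query("SELECT id, name FROM person",
                (rs, rowNum) -> new Person(rs.getLong("id"), rs.getString("name")));
    }
}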

DTO vs DAO vs Service

Data Transfer Object

DTO is an object that carries data between processes. When you're working with a remote interface, each call to it is expensive. As a result you need to reduce the number of calls. The solution is to create a Data Transfer Object that can hold all the data for the call. It needs to be serializable to go across the connection. Usually an assembler is used on the server side to transfer data between the DTO and any domain objects. It's often little more than a bunch of fields and the getters and setters for them.

Data Access Object

Data Access Object abstracts and encapsulates all access to the data source. The DAO manages the connection with the data source to obtain and store data.
The DAO implements the access mechanism required to work with the data source. The data source could be a persistent store like an RDBMS, or a business service accessed via REST or SOAP.
The DAO abstracts the underlying data access implementation for the Service objects to enable transparent access to the data source. The Service also delegates data load and store operations to the DAO.

Service

Service objects are doing the work that the application needs to do for the domain you're working with. It involves calculations based on inputs and stored data, validation of any data that comes in from the presentation, and figuring out exactly what data source logic to dispatch, depending on commands received from the presentation.
Service Layer defines an application's boundary and its set of available operations from the perspective of interfacing client layers. It encapsulates the application's business logic, controlling transactions and coordinating responses in the implementation of its operations.
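
A plain-Java sketch of how the three pieces relate (all names are hypothetical):

import java.io.Serializable;

// Domain/entity object that the DAO works with
class Customer {
    Long id;
    String name;
}

// DTO: a serializable bag of fields that crosses the process boundary
class CustomerDto implements Serializable {
    private Long id;
    private String name;
    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// DAO: abstracts and encapsulates access to the data source
interface CustomerDao {
    Customer findById(Long id);
    void save(Customer customer);
}

// Service: business logic; uses the DAO and assembles a DTO for the caller
class CustomerService {
    private final CustomerDao customerDao;

    CustomerService(CustomerDao customerDao) {
        this.customerDao = customerDao;
    }

    public CustomerDto getCustomer(Long id) {
        Customer customer = customerDao.findById(id);
        CustomerDto dto = new CustomerDto();   // assembler step: copy only what the client needs
        dto.setId(customer.id);
        dto.setName(customer.name);
        return dto;
    }
}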

Spring Propagation in Transactions

@Transactional(propagation=Propagation.REQUIRED)
If not specified, the default propagation behavior is REQUIRED.
Other options are REQUIRES_NEW, MANDATORY, SUPPORTS, NOT_SUPPORTED, NEVER, and NESTED.
REQUIRED
  • Indicates that the target method must run inside a transaction. If a tx has already been started before the invocation of this method, the method joins it; otherwise a new tx begins as soon as the method is called.
REQUIRES_NEW
  • Indicates that a new tx has to start every time the target method is called. If a tx is already in progress, it is suspended before the new one starts.
MANDATORY
  • Indicates that the target method requires an active tx to be running. If no tx is in progress, it fails by throwing an exception.
SUPPORTS
  • Indicates that the target method can execute irrespective of a tx. If a tx is running, it participates in that tx; if no tx is running, the method still executes, just non-transactionally.
  • Methods which fetch data are the best candidates for this option.
NOT_SUPPORTED
  • Indicates that the target method doesn’t require the transaction context to be propagated.
  • Methods that would otherwise run in a transaction but only perform in-memory operations are the best candidates for this option.
NEVER
  • Indicates that the target method will raise an exception if executed in a transactional process.
  • This option is mostly not used in projects.
@Transactional(rollbackFor = Exception.class)
  • By default, a transaction rolls back only for RuntimeException (and Error), not for checked exceptions.
  • Spring's own data-access APIs throw unchecked exceptions (RuntimeException), so if any such method fails, the container will roll back the ongoing transaction.
  • The problem is only with checked exceptions, so this option can be used to declaratively roll back a transaction when a checked exception occurs.
@Transactional(noRollbackFor = IllegalStateException.class)
  • Indicates that a rollback should not be issued if the target method raises this exception (see the sketch below).
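
A hedged sketch combining both attributes (OrderService, Order and the method bodies are stand-ins):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService {

    // Checked exceptions do not cause a rollback by default; rollbackFor changes that.
    @Transactional(rollbackFor = Exception.class)
    public void placeOrder(Order order) throws Exception {
        // DAO calls that may throw a checked exception
    }

    // An IllegalStateException thrown here will NOT trigger a rollback.
    @Transactional(noRollbackFor = IllegalStateException.class)
    public void updateOrder(Order order) {
        // DAO calls
    }
}
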
Now the last but most important step in transaction management is the placement of the @Transactional annotation. Most of the time there is confusion about where the annotation should be placed: at the service layer or the DAO layer?
@Transactional: Service or DAO Layer?
  • The service layer is usually the best place for @Transactional, since it holds the detail-level use-case behavior for a user interaction that logically belongs in one transaction.
  • There are a lot of CRUD applications without any significant business logic; for them, a service layer that just passes data through between the controllers and the data access objects adds little value, and in these cases we can put the transaction annotation on the DAO.
  • So in practice you can put them in either place; it's up to you.
  • However, if you put @Transactional on the DAO layer and that DAO layer is reused by different services, it becomes difficult, because different services may have different transactional requirements.
  • If your service layer retrieves objects using Hibernate and your domain objects use lazy initialization, you need a transaction open in the service layer, otherwise the ORM will throw a LazyInitializationException.
  • Consider another example where your service layer calls two different DAO methods to perform DB operations. If the first DAO operation fails, the second may still succeed and you will end up with an inconsistent DB state. Annotating the service layer saves you from such situations (see the sketch below).
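
A hedged sketch of that last point (TransferService, AccountDao, AuditDao and their methods are stand-ins):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class TransferService {

    @Autowired private AccountDao accountDao; // hypothetical DAOs
    @Autowired private AuditDao auditDao;

    // Both DAO calls run in ONE transaction: if auditDao.log(...) throws a
    // RuntimeException, the earlier debit is rolled back as well.
    @Transactional
    public void debitAndAudit(long accountId, double amount) {
        accountDao.debit(accountId, amount);
        auditDao.log(accountId, amount);
    }
}
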
Ref: https://dzone.com/articles/spring-transaction-management

Friday, 1 September 2017

ISOLATION LEVEL

A relational database's strong consistency model is based on the ACID transaction properties.

In computer science, ACID (Atomicity, Consistency, Isolation, Durability) is a set of properties of database transactions intended to guarantee validity even in the event of errors, power failures, etc. In the context of databases, a sequence of database operations that satisfies the ACID properties, and thus can be perceived as a single logical operation on the data, is called a transaction. For example, a transfer of funds from one bank account to another, even involving multiple changes such as debiting one account and crediting another, is a single transaction.

Atomicity requires that each transaction be "all or nothing": if one part of the transaction fails, then the entire transaction fails, and the database state is left unchanged. An atomic system must guarantee atomicity in each and every situation, including power failures, errors and crashes.

The consistency property ensures that any transaction will bring the database from one valid state to another. Any data written to the database must be valid according to all defined rules, including constraints, cascades, triggers, and any combination thereof.

The isolation property ensures that the concurrent execution of transactions results in a system state that would be obtained if transactions were executed sequentially, i.e., one after the other. Providing isolation is the main goal of concurrency control. Depending on the concurrency control method (i.e., if it uses strict - as opposed to relaxed - serializability), the effects of an incomplete transaction might not even be visible to another transaction.

The durability property ensures that once a transaction has been committed, it will remain so, even in the event of power loss, crashes, or errors. In a relational database, for instance, once a group of SQL statements execute, the results need to be stored permanently (even if the database crashes immediately thereafter). To defend against power loss, transactions (or their effects) must be recorded in a non-volatile memory.

Isolation and consistency

In a relational database system, atomicity and durability are strict properties, while consistency and isolation are more or less configurable. We cannot even separate consistency from isolation as these two properties are always related.
The lower the isolation level, the less consistent the system will get. From the least to the most consistent, there are four isolation levels:
  • READ UNCOMMITTED
  • READ COMMITTED (protecting against dirty reads)
  • REPEATABLE READ (protecting against dirty and non-repeatable reads)
  • SERIALIZABLE (protecting against dirty, non-repeatable reads and phantom reads)

Read committed is an isolation level that guarantees that any data read was committed at the moment it is read. It simply restricts the reader from seeing any intermediate, uncommitted ('dirty') data. It makes no promise whatsoever that if the transaction re-issues the read it will find the same data; data is free to change after it was read.
Repeatable read is a higher isolation level that, in addition to the guarantees of the read committed level, also guarantees that any data read cannot change: if the transaction reads the same data again, it will find the previously read data in place, unchanged, and available to read.
The next isolation level, Serializable, makes an even stronger guarantee: in addition to everything repeatable read guarantees, it also guarantees that no new data can be seen by a subsequent read.
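
At the JDBC level the isolation level is set per connection; a minimal plain-JDBC sketch (the in-memory H2 URL and the EMPLOYEE table are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class IsolationExample {

    public static void main(String[] args) throws SQLException {
        try (Connection con = DriverManager.getConnection("jdbc:h2:mem:test", "sa", "")) {
            con.setAutoCommit(false);
            // Other options: TRANSACTION_READ_UNCOMMITTED, TRANSACTION_REPEATABLE_READ,
            // TRANSACTION_SERIALIZABLE
            con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);

            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM EMPLOYEE")) {
                rs.next();
                System.out.println("Employees: " + rs.getLong(1));
            }
            con.commit();
        }
    }
}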

–> DIRTY READS: Reading uncommitted modifications is called a dirty read. Values in the data can be changed, and rows can appear or disappear in the data set, before the end of the transaction, thus giving you incorrect data.
This happens at the READ UNCOMMITTED transaction isolation level, the lowest level. Here, running transactions do not issue shared locks to prevent other transactions from modifying data read by the current transaction, and they are not prevented from reading rows that have been modified but not yet committed by other transactions.
To prevent dirty reads, the READ COMMITTED or SNAPSHOT isolation level should be used.
 
–> PHANTOM READS: Rows appearing or disappearing in the current transaction because of other transactions are called phantom reads. New rows can be added by other transactions, so you get a different number of rows by firing the same query in the current transaction.
At the REPEATABLE READ isolation level, shared locks are acquired. This prevents data modification while another transaction is reading the rows, and also prevents data from being read while another transaction is modifying the rows. But it does not stop INSERT operations, which can add records to a table being modified or read in another transaction. This leads to phantom reads.
Phantom reads can be prevented by using the SERIALIZABLE isolation level, the highest level. This level acquires RANGE locks, thus preventing read, modification and INSERT operations in other transactions until the first transaction completes.


Dirty Read:-
A dirty read occurs when one transaction is changing a record and another transaction can read this record before the first transaction has been committed or rolled back. This is known as a dirty read scenario because there is always the possibility that the first transaction may roll back the change, resulting in the second transaction having read invalid data.
Dirty Read Example:-
Transaction A begins.
UPDATE EMPLOYEE SET SALARY = 10000 WHERE EMP_ID = '123';
Transaction B begins.
SELECT * FROM EMPLOYEE;
(Transaction B sees data updated by transaction A, but those updates have not yet been committed.)
Non-Repeatable Read:-
Non-repeatable reads happen when, within the same transaction, the same query yields a different result. This occurs when one transaction repeatedly retrieves the data while a different transaction alters the underlying data. This causes different, non-repeatable results to be read by the first transaction.
Non-Repeatable Example:-
Transaction A begins.
SELECT * FROM EMPLOYEE WHERE EMP_ID = '123';
Transaction B begins.
UPDATE EMPLOYEE SET SALARY = 20000 WHERE EMP_ID = '123';
(Transaction B updates rows viewed by transaction A before transaction A commits.) If Transaction A issues the same SELECT statement again, the results will be different.
Phantom Read:-
A phantom read occurs when a transaction executes the same query more than once and the second result set includes rows that were not visible in the first result set. This is caused by another transaction inserting new rows between the execution of the two queries. This is similar to a non-repeatable read, except that the number of rows is changed, either by insertion or by deletion.
Phantom Read Example:-
Transaction A begins.
SELECT * FROM EMPLOYEE WHERE SALARY > 10000 ;
Transaction B begins.
INSERT INTO EMPLOYEE (EMP_ID, FIRST_NAME, DEPT_ID, SALARY) VALUES ('111', 'Jamie', 10, 35000);
Transaction B inserts a row that would satisfy the query in Transaction A if it were issued again.

Monday, 12 June 2017

Difference between First and Second Level Cache in Hibernate

Hibernate Provides Cache at different Levels:
1. First-level cache -- Session (enabled by default)
2. Second-level cache -- SessionFactory (needs to be configured)

The first-level (Session) cache greatly improves the performance of a Java application by minimizing database round trips and executing fewer queries. For example, if an object is modified several times within the same transaction, Hibernate will generate only one SQL UPDATE statement at the end of the transaction, containing all the modifications.

The second-level cache can be configured using EhCache or OSCache.

Here is a sample configuration to configure Second level cache with EhCache:

<prop key="hibernate.cache.use_second_level_cache">true</prop>
<prop key="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</prop>

Don't forget to include hibernate-ehcache.jar into your classpath.


You can also use the JPA annotation @Cacheable to specify which entity is cacheable, and the Hibernate annotation @Cache to specify the caching strategy, e.g. a CacheConcurrencyStrategy such as READ_WRITE or READ_ONLY, to tell Hibernate how the second-level cache should behave (a sketch follows).
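
For example, a hedged entity sketch (the entity and its fields are placeholders):

import javax.persistence.Cacheable;
import javax.persistence.Entity;
import javax.persistence.Id;

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;

@Entity
@Cacheable                                           // JPA: entity is eligible for the second-level cache
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)  // Hibernate: how the cached entry may be used
public class Student {

    @Id
    private Long id;

    private String name;

    // getters and setters omitted for brevity
}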

Wednesday, 11 January 2017

GPG Cheat Sheet

Quick'n easy gpg cheatsheet
If you found this page, hopefully it's what you were looking for. It's just a brief explanation of some of the command line functionality from gnu privacy guard (gpg).


Filenames are italicized (loosely, some aren't, sorry), so if you see something italicized, think "put my filename there."
I've used User Name as being the name associated with the key. Sorry that isn't very imaginative. I *think* gpg is pretty wide in its user assignments, i.e. the name for my private key is Charles Lockhart, but I can reference that by just putting in Lockhart. That doesn't make any sense, sorry.
to create a key:
gpg --gen-key
generally you can select the defaults.
to export a public key into file public.key:
gpg --export -a "User Name" > public.key

This will create a file called public.key with the ASCII representation of the public key for User Name. This is a variation on:
gpg --export
which by itself is basically going to print out a bunch of crap to your screen. I recommend against doing this.
gpg --export -a "User Name"
prints out the public key for User Name to the command line, which is only semi-useful
to export a private key:
gpg --export-secret-key -a "User Name" > private.key

This will create a file called private.key with the ASCII representation of the private key for User Name.
It's pretty much like exporting a public key, but you have to override some default protections. There's a note (*) at the bottom explaining why you may want to do this.
to import a public key:
gpg --import public.key

This adds the public key in the file "public.key" to your public key ring.
to import a private key:
NOTE: I've been informed that the manpage indicates that "this is an obsolete option and is not used anywhere." So this may no longer work.
gpg --allow-secret-key-import --import private.key

This adds the private key in the file "private.key" to your private key ring. There's a note (*) at the bottom explaining why you may want to do this.
to delete a public key (from your public key ring):
gpg --delete-key "User Name"
This removes the public key from your public key ring.
NOTE! If there is a private key on your private key ring associated with this public key, you will get an error! You must delete your private key for this key pair from your private key ring first.
to delete a private key (a key on your private key ring):
gpg --delete-secret-key "User Name"
This deletes the secret key from your secret key ring.
To list the keys in your public key ring:
gpg --list-keys

To list the keys in your secret key ring:
gpg --list-secret-keys

To generate a short list of numbers that you can use via an alternative method to verify a public key, use:
gpg --fingerprint > fingerprint
This creates the file fingerprint with your fingerprint info.
To encrypt data, use:
gpg -e -u "Sender User Name" -r "Receiver User Name" somefile

There are some useful options here, such as -u to specify the secret key to be used, and -r to specify the public key of the recipient.
As an example: gpg -e -u "Charles Lockhart" -r "A Friend" mydata.tar
This should create a file called "mydata.tar.gpg" that contains the encrypted data. I think you specify the sender's username so that the recipient can verify that the contents are from that person (using the fingerprint?).
NOTE!: mydata.tar is not removed; you end up with two files, so if you want to have only the encrypted file in existence, you probably have to delete mydata.tar yourself.
An interesting side note, I encrypted the preemptive kernel patch, a file of 55,247 bytes, and ended up with an encrypted file of 15,276 bytes.

To decrypt data, use:
gpg -d mydata.tar.gpg
If you have multiple secret keys, it'll choose the correct one, or output an error if the correct one doesn't exist. You'll be prompted to enter your passphrase. Afterwards there will exist the file "mydata.tar", and the encrypted "original," mydata.tar.gpg.
NOTE: when I originally wrote this cheat sheet, that's how it worked on my system, however it looks now like "gpg -d mydata.tar.gpg" dumps the file contents to standard output. The working alternative (worked on my system, anyway) would be to use "gpg -o outputfile -d encryptedfile.gpg", or using mydata.tar.gpg as an example, I'd run "gpg -o mydata.tar -d mydata.tar.gpg". Alternatively you could run something like "gpg -d mydata.tar.gpg > mydata.tar" and just push the output into a file. Seemed to work either way.
Ok, so what if you're a paranoid bastard and want to encrypt some of your own files, so nobody can break into your computer and get them? Simply encrypt them using yourself as the recipient.
I haven't used the commands:
gpg --edit-key
gpg --gen-revoke

  • --gen-revoke creates a revocation certificate, which when distributed to people and keyservers tells them that your key is no longer valid, see http://www.gnupg.org/gph/en/manual/r721.html
  • --edit-key allows you to do an assortment of key tasks, see http://www.gnupg.org/gph/en/manual/r899.html


Sharing Secret Keys

NOTE!: the following use cases indicate why the secret-key import/export commands exist, or at least a couple ideas of what you could do with them. HOWEVER, there's some logistics required for sharing that secret-key. How do you get it from one computer to another? I guess encrypting it and sending it by email would probably be ok, but I wouldn't send it unencrypted with email, that'd be DANGEROUS.
Use Case *.1 : Mentioned above were the commands for exporting and importing secret keys, and I want to explain one reason why you might want to do this. Basically, if you want one key-pair for all of your computers (assuming you have multiple computers), then this allows you to export that key-pair from the original computer and import it to your other computers.
Use Case *.2 : Mentioned above were the commands for exporting and importing secret keys, and I want to explain one reason why you might want to do this. Basically, if you belonged to a group and wanted to create a single key-pair for that group, one person would create the key-pair, then export the public and private keys, give them to the other members of the group, and they would all import that key-pair. Then a member of the group, or someone outside, could use the group public key to encrypt a message and/or data and send it to members of the group, and all of them would be able to access it. Basically you could create a simplified system where only one public key was needed to send encrypted stuff to multiple recipients.