Hazelcast clients are lightweight clients to Hazelcast members. Hazelcast members are responsible for storing the data and its partitions; they act as the server in the traditional client-server model.
Hazelcast clients are created only to access data stored in the Hazelcast members of the cluster. They do not store data and do not take ownership of any partitions.
Clients have their own life cycle and do not affect the Hazelcast member instances.
Let's first create Server.java and run it.
import java.util.Map;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class Server {
   public static void main(String... args) {
      // initialize a Hazelcast server/member instance
      HazelcastInstance hazelcast = Hazelcast.newHazelcastInstance();

      // create a simple distributed map
      Map<String, String> vehicleOwners = hazelcast.getMap("vehicleOwnerMap");

      // add a key-value pair to the map
      vehicleOwners.put("John", "Honda-9235");

      // do not shut down; let the member keep running
      //hazelcast.shutdown();
   }
}
Now, run the above class.
java -cp .\target\demo-0.0.1-SNAPSHOT.jar com.example.demo.Server
To set up a client, we also need to add the hazelcast-client jar as a dependency.
<dependency>
   <groupId>com.hazelcast</groupId>
   <artifactId>hazelcast-client</artifactId>
   <version>3.12.12</version>
</dependency>
Let's now create Client.java. Note that, similar to Hazelcast members, clients can also be configured programmatically or via XML configuration (i.e., via -Dhazelcast.client.config or hazelcast-client.xml).
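For instance, programmatic configuration can be sketched roughly as follows, assuming the 3.12 client API; the group name "dev" and the address 127.0.0.1:5701 are illustrative defaults, not values mandated by this tutorial.

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;

public class ConfiguredClient {
   public static void main(String... args) {
      ClientConfig config = new ClientConfig();
      // group name must match the member's group config ("dev" is the default)
      config.getGroupConfig().setName("dev");
      // address of a known member; illustrative value
      config.getNetworkConfig().addAddress("127.0.0.1:5701");

      // create the client with the explicit configuration
      HazelcastInstance hzClient = HazelcastClient.newHazelcastClient(config);
      System.out.println("Connected as: " + hzClient.getName());
      hzClient.getLifecycleService().shutdown();
   }
}
```

Passing a ClientConfig instead of calling newHazelcastClient() with no arguments replaces the default (or XML-derived) configuration entirely.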
Example
Let's use the default configuration, which means our client will be able to connect to instances running on the local machine.
import java.util.Map;

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;

public class Client {
   public static void main(String... args) {
      // initialize a Hazelcast client
      HazelcastInstance hzClient = HazelcastClient.newHazelcastClient();

      // read from the distributed map
      Map<String, String> vehicleOwners = hzClient.getMap("vehicleOwnerMap");
      System.out.println(vehicleOwners.get("John"));
      System.out.println("Member of cluster: " + hzClient.getCluster().getMembers());

      // shut down the client
      hzClient.getLifecycleService().shutdown();
   }
}
Now, run the above class.
java -cp .\target\demo-0.0.1-SNAPSHOT.jar com.example.demo.Client
Output
It will produce the following output −
Honda-9235 Member of cluster: [Member [localhost]:5701 - a47ec375-3105-42cd-96c7-fc5eb382e1b0]
As seen from the output −
- The cluster contains only one member, the one started by Server.java.
- The client is able to access the map stored inside that member.
Load Balancing
Hazelcast clients support load balancing across cluster members using various algorithms. Load balancing ensures that requests are spread across members so that no single member of the cluster is overloaded. The default load balancing mechanism is round-robin; it can be changed using the load-balancer tag in the client configuration. Here is a sample that chooses a strategy which picks a member at random.
<hazelcast-client xmlns="http://www.hazelcast.com/schema/client-config"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://www.hazelcast.com/schema/client-config
   http://www.hazelcast.com/schema/client-config/hazelcast-client-config-3.12.xsd">
   <load-balancer type="random"/>
</hazelcast-client>
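The same choice can also be made programmatically. A minimal sketch, assuming the 3.12 client API, where the round-robin and random strategies are implemented by RoundRobinLB and RandomLB in com.hazelcast.client.util:

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.client.util.RandomLB;
import com.hazelcast.core.HazelcastInstance;

public class RandomLbClient {
   public static void main(String... args) {
      ClientConfig config = new ClientConfig();
      // pick a random member for each operation instead of the default round-robin
      config.setLoadBalancer(new RandomLB());

      HazelcastInstance hzClient = HazelcastClient.newHazelcastClient(config);
      // ... use the client as usual ...
      hzClient.getLifecycleService().shutdown();
   }
}
```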
Failover
In a distributed environment, members can fail at any time. To support failover, it is recommended to provide the addresses of multiple members in the client configuration. If the client can reach any one of those members, that is sufficient for it to discover the rest of the cluster.
For example, if we use the following configuration −
<hazelcast-client xmlns="http://www.hazelcast.com/schema/client-config"
   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xsi:schemaLocation="http://www.hazelcast.com/schema/client-config
   http://www.hazelcast.com/schema/client-config/hazelcast-client-config-3.12.xsd">
   <network>
      <cluster-members>
         <address>machine1</address>
         <address>machine2</address>
      </cluster-members>
   </network>
</hazelcast-client>
Even if, say, machine1 goes down, clients can use machine2 to get access to other members of the cluster.
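The same failover list can be provided programmatically. A sketch assuming the 3.12 client API, where machine1 and machine2 stand in for real host names:

```java
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.core.HazelcastInstance;

public class FailoverClient {
   public static void main(String... args) {
      ClientConfig config = new ClientConfig();
      // addresses of known members; the client only needs to reach one of them
      config.getNetworkConfig().addAddress("machine1:5701", "machine2:5701");

      HazelcastInstance hzClient = HazelcastClient.newHazelcastClient(config);
      System.out.println("Members: " + hzClient.getCluster().getMembers());
      hzClient.getLifecycleService().shutdown();
   }
}
```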