Wednesday, September 6, 2023

Introduction to the ONDC Platform

 Introduction to ONDC

Open Network for Digital Commerce (ONDC) is a new and revolutionary approach to digital commerce in India. It provides an open specification and aims to enable more dynamic participation of buyers and sellers, promote fair competition, increase user choice and foster further innovation. By providing a platform and an open specification, a myriad of innovative applications can be built on top of the ONDC Platform. ONDC itself is built on top of the Beckn Protocol and IndiaStack.

 

Read my Earlier Article on Introduction to the Beckn Protocol.

 

What really is ONDC?

It has to be noted that ONDC is not an application, a software product or a technology. It is a set of specifications forming an open network, and associated platforms can be built based on this specification. ONDC is none of the following:


✗ Application or Platform
✗ Central Intermediary
✗ Medium to Digitize Biz
✗ Government Regulator
 

At the same time, ONDC is an open network and a set of open specifications to enable and ease digital commerce.

✓ Open Network, Set of Open Specifications
✓ Eliminates Need for Central Intermediary
✓ Enabler for E-Commerce and Innovation
✓ It's a Market & Community Led Initiative

 

How ONDC Works

At the core of its advantages, ONDC goes beyond the current platform-centric digital commerce model, where the buyer and the seller must use the same single platform to be digitally visible and do a business transaction. With the help of its underlying protocols, ONDC provides an open specification that standardizes cataloguing, inventory management, order management and order fulfilment. This gives businesses multiple options to be discoverable over the network and to conduct business. It will also encourage the adoption of digital means by businesses that are not yet on any digital commerce network. The greatest advantage is bound to accrue to small and medium businesses.
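To make the idea of a standardized, network-wide search concrete, here is a minimal Java sketch (not an official ONDC client) of the kind of "search" intent a buyer-side app might post to the network. The field names follow the Beckn API convention (context, action, intent); the gateway URL, domain code and city code are illustrative placeholders.

 // A minimal, illustrative sketch (not an official ONDC client): it hand-builds
 // a Beckn-style "search" intent and posts it to a placeholder gateway URL.
 package com.ondc.sketch;

 import java.net.HttpURLConnection;
 import java.net.URL;
 import java.nio.charset.StandardCharsets;

 public class OndcSearchSketch {

     public static void main(String[] args) throws Exception {

         // Standardized "context" (domain, city, action) plus the search "intent"
         String payload = "{"
                 + "\"context\": {"
                 +   "\"domain\": \"nic2004:52110\","   // illustrative retail domain code
                 +   "\"city\": \"std:080\","           // illustrative city code
                 +   "\"action\": \"search\""
                 + "},"
                 + "\"message\": { \"intent\": {"
                 +   "\"item\": { \"descriptor\": { \"name\": \"atta\" } }"
                 + "} } }";

         // Hypothetical gateway endpoint; a real BAP uses its registered gateway
         URL url = new URL("https://example-gateway.local/search");
         HttpURLConnection conn = (HttpURLConnection) url.openConnection();
         conn.setRequestMethod("POST");
         conn.setRequestProperty("Content-Type", "application/json");
         conn.setDoOutput(true);
         conn.getOutputStream().write(payload.getBytes(StandardCharsets.UTF_8));
         System.out.println("Gateway responded with HTTP " + conn.getResponseCode());
     }
 }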

 


 

As you can observe from the above, sellers from multiple platforms can be discovered via ONDC. Sellers, usually also called Providers in ONDC, technically operate through a Beckn Provider Platform (BPP). The sellers use the BPP to acquire orders and to fulfil them via ONDC. A Beckn Application Platform (BAP) allows consumers to discover sellers, place orders and make payments.




The underlying communication in ONDC happens via Beckn Gateways, which form the backbone of ONDC. Beckn stands for Blockchain Enabled Commerce Network and is the driving protocol underlying ONDC. Beckn Gateways (BGs) lie between BAPs and BPPs to provide the routing infrastructure, or the transaction layer. BGs are extremely lean and stateless routing servers. The purpose of this infrastructure is to optimize the discovery of BPPs by BAPs. Business-wise, they should ensure a fair chance for all active BPPs to be discovered and to sell their goods.
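The following is a conceptual Java sketch of what such a lean, stateless gateway does: resolve the relevant BPP endpoints and forward the search to each of them. The endpoint list and the lookup are placeholders; a real gateway resolves subscribers from the network registry, and the BPPs respond asynchronously via on_search callbacks.

 // Conceptual sketch of a stateless gateway: multicast a BAP's "search" to all
 // relevant BPPs. Endpoints are placeholders; a real gateway resolves them from
 // the network registry, and BPPs reply asynchronously via on_search callbacks.
 package com.ondc.sketch;

 import java.util.Arrays;
 import java.util.List;

 public class GatewayFanOutSketch {

     // In production this lookup queries the registry, filtered by domain/city
     static List<String> lookupBppEndpoints(String domain, String city) {
         return Arrays.asList("https://bpp-one.local/search",
                              "https://bpp-two.local/search");
     }

     static void route(String searchPayload, String domain, String city) {
         for (String endpoint : lookupBppEndpoints(domain, city)) {
             // Fire-and-forget forward; no state is kept between requests
             System.out.println("Forwarding search to " + endpoint);
         }
     }

     public static void main(String[] args) {
         route("{ \"context\": { \"action\": \"search\" } }", "nic2004:52110", "std:080");
     }
 }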

 





 

Buyer and Seller Cycle in ONDC

Below, I also provide a self-explanatory depiction of the Buyer Cycle and the Seller Cycle in ONDC. It is more like a business process flow diagram (for a single instance) of how a buyer will discover, search, purchase and pay using ONDC. It also shows how sellers can acquire consumers.

 


Below is the depiction of the merchant cycle for a given instance of a purchase by a buyer. It shows how the order is fulfilled via logistics providers. In a way, it shows how fairness and competitiveness can be promoted among sellers and the other members in the chain, such as logistics providers.

 



Process for Joining of Buyer and Seller Apps

Before I end this article, I would also like to share the process flows for a buyer app joining ONDC and for sellers joining ONDC. Please note that this is not related to buyers registering on a buyer app, or sellers on a seller app.

 


 


Advantages of ONDC

In the end, we take a look at the advantages that ONDC brings to the world of digital commerce, from the buyer's and seller's viewpoints, and also for the other participants in the transaction and fulfilment cycle.

 

· Government Backed Initiative (IndiaStack)

· Increased Choice and Fair Competition

· Democratisation and Decentralisation

· Lower Prices / Competitiveness in Pricing

· Efficiency in Logistics and Shipments

· Ensured Data Privacy and Confidentiality

· Support for Micro, Small, Rural Business

· Easier Rollout of Discounts & Promotions

 

 

Current Businesses (Buyers/Sellers) on ONDC (Sep 2023)

 

Buyer Apps – PayTM, Magicpin, SpiceMoney, CraftsVilla

Seller Apps – Meesho, Snapdeal

Logistics – Ekart, Dunzo, Delhivery

 

Conclusion

ONDC does present a promising opportunity, and with government backing it is all the more trustworthy. It has started off in multiple cities in India and will soon be launched in more cities and towns. It remains to be seen whether it will take hold as the consistent platform of choice for big players or remain mainly an advantage for micro, small and medium businesses. In any case, it will continue to foster innovation, and there may be multiple platforms inspired by ONDC, whether or not they are directly related to digital commerce.

 

References/Credits

ONDC – Wikipedia Page
Beckn for Developers
Government of India (PIB)
PayTM Blog on ONDC
Business Standard – Article


Saturday, August 19, 2023

Introduction to the Beckn Protocol

Blockchain Enabled Commerce Network (beckn), as the name suggests, is a blockchain-enabled, decentralized protocol. It provides a specification that enables the creation of decentralized networks. It also provides APIs, data models, reference architectures, transaction mechanisms and global standards for adoption by digital platforms.

Usually, an intermediary allows providers and consumers to come together and interact by providing a centralized mechanism. Beckn enables providers and consumers to discover, identify and transact with each other without the need for a central intermediary. It can be thought of as a set of rules, mutually agreed upon by several platforms, that allows their users to carry out discovery, ordering, fulfilment and post-fulfilment activities with each other. The protocol is not specific to any sector, technology or industry. The taxonomy of any application in any industry can be represented using the standard data model of beckn.
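As a quick illustration of this lifecycle, the short Java sketch below lists the core beckn protocol actions grouped by stage. The action names (search, select, init, confirm, status, track, rating, support) and their asynchronous on_* callbacks come from the published beckn API specification; the grouping code itself is only illustrative.

 // Illustrative sketch: the beckn transaction lifecycle as paired actions.
 // Action names are from the beckn API specification; the grouping is mine.
 package me.beckn.sketch;

 public class BecknLifecycleSketch {

     enum Stage { DISCOVERY, ORDERING, FULFILMENT, POST_FULFILMENT }

     // Every consumer-side action has an asynchronous provider-side callback
     static void describe(Stage stage, String action) {
         System.out.printf("%-16s %-8s -> on_%s%n", stage, action, action);
     }

     public static void main(String[] args) {
         describe(Stage.DISCOVERY,       "search");
         describe(Stage.ORDERING,        "select");
         describe(Stage.ORDERING,        "init");
         describe(Stage.ORDERING,        "confirm");
         describe(Stage.FULFILMENT,      "status");
         describe(Stage.FULFILMENT,      "track");
         describe(Stage.POST_FULFILMENT, "rating");
         describe(Stage.POST_FULFILMENT, "support");
     }
 }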
 

The idea was born in Nov 2018 and the first formal transaction actually happened in Sep 2020. The open network went live in Jul 2021. ONDC, the first open e-commerce network platform, was incorporated in Dec 2021. Other protocols based on beckn, like DSEP (Decentralized Skills and Education Protocol) and DHP (Decentralized Health Protocol), were launched in Dec 2021.
 

The official website of beckn is https://becknprotocol.io/
 

The beckn ecosystem has consumers and providers. A consumer is one who wants to place orders for products and services published on a platform; the platform uses beckn protocol API calls to try and fulfil this request. There may be multiple such platforms, known as beckn application platforms (BAPs). To fulfil these requests, beckn defines beckn provider platforms (BPPs). BPPs use the beckn protocol for discovery, ordering, fulfilment and post-fulfilment.




The beckn ecosystem has multiple consumer platforms and provider platforms, which talk to each other via the beckn protocol. This communication is facilitated via an open registry. This registry acts as the public key infrastructure (PKI) that stores the public keys and endpoints of all participants in the network. The sender digitally signs messages using its private key, and the receiver looks up the sender's public key in the registry to verify the signature. Hence, this registry is the trust infrastructure for inter-platform communication.
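A minimal Java sketch of this sign-then-verify trust model is shown below, assuming the Ed25519 signature scheme used on beckn networks (available in the JDK from Java 15 onward). The registry lookup is simulated here by reusing the generated public key directly.

 // Minimal sketch of the sign-then-verify trust model, assuming the Ed25519
 // scheme used on beckn networks (JDK 15+). The registry lookup is simulated
 // here by reusing the generated public key directly.
 package me.beckn.sketch;

 import java.nio.charset.StandardCharsets;
 import java.security.KeyPair;
 import java.security.KeyPairGenerator;
 import java.security.Signature;

 public class RegistryTrustSketch {

     public static void main(String[] args) throws Exception {

         // Each participant generates a key pair; the public key and endpoint
         // are published to the open registry
         KeyPair senderKeys = KeyPairGenerator.getInstance("Ed25519").generateKeyPair();

         byte[] message = "{\"context\":{\"action\":\"search\"}}"
                 .getBytes(StandardCharsets.UTF_8);

         // Sender signs the request body with its private key
         Signature signer = Signature.getInstance("Ed25519");
         signer.initSign(senderKeys.getPrivate());
         signer.update(message);
         byte[] signature = signer.sign();

         // Receiver looks up the sender's public key (here: directly) and verifies
         Signature verifier = Signature.getInstance("Ed25519");
         verifier.initVerify(senderKeys.getPublic());
         verifier.update(message);
         System.out.println("Signature valid: " + verifier.verify(signature));
     }
 }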

The diagram below shows the beckn ecosystem architecture. The first two layers are implemented by an organization that wants to provide the server-side applications for consumers and providers. The last two layers are the actual specification and the support for an entity to get certified to join the network. Every entity that wants to be part of the network must be part of one or more of the layers listed in the left column.



In the very middle of the diagram is the actual infrastructure and security that transforms beckn-compliant platforms into live transacting entities in an open network.
 

Some of the actual applications and networks formed out of beckn include the Kochi Open Mobility Network and the Open Network for Digital Commerce (ONDC). beckn holds great promise across industries in terms of cost reduction, ease of joining, decentralized transactions, easy interoperability and the ability to build more innovative applications.


Wednesday, July 5, 2023

Apache Kafka - Java Producer & Consumer Example (Kafka v3.4 on Windows 10)

Introduction

This is Part-2 of a 2-part article series on running the Apache Kafka server, configuring a Kafka topic, and creating a core Java based Kafka consumer as well as a core Java based Kafka producer. All of this is demonstrated through a step-by-step example, using Java 8 and Apache Kafka v3.4 on Windows 10. Part-1 focussed on the Kafka consumer and producer from the command line; this article focuses on their core Java counterparts. It is important that the reader goes through Part-1 and completes its example first, to build a basic foundation in Kafka. This article also provides the Maven dependencies required to create, build and run Apache Kafka consumers and producers. Finally, it shows the code to run the consumer and producer threads. The code is straightforward for an intermediate+ Java developer, hence line-by-line explanations are omitted.


Pre-Requisites

1. Install Java/JRE (v8.0 is used in this Example)
2. Install Apache Kafka 3.4.0 from the Given Link
3. Set Java Classpath and Set JAVA_HOME Correctly
4. UnZIP/UnTAR Apache Kafka Downloaded in (2)
5. Use a Text Editor like [Notepad++] for Editing
6. Eclipse IDE (Or Others) to Create, Run & Test


Before You Begin, Read my Article #1 in this Series at : https://rebrand.ly/skp-ts-kafka-v3-win

Maven Project (Eclipse) and Dependencies

Create a Simple JAR archetype Maven Project in Eclipse (or the IDE of your choice). Add the following dependencies to your pom.xml, and make sure that your compiler version is Java 8. (The kafka-clients version below, 3.2.1, is older than the v3.4 broker but remains compatible, since Kafka brokers support older clients.)
 <dependencies>  
         <!-- This is the Core Library Containing the Classes We will Use -->  
         <dependency>  
             <groupId>org.apache.kafka</groupId>  
             <artifactId>kafka-clients</artifactId>  
             <version>3.2.1</version>  
         </dependency>  
         <!-- The Kafka Client Libraries use the slf4j Logger, So we Need to Add   
             This as a Dependency so that the Required Classes are Present in Our   
             Classpath for the Kafka Client Libraries to Use -->  
         <dependency>  
             <groupId>org.slf4j</groupId>  
             <artifactId>slf4j-api</artifactId>  
             <version>1.7.36</version>  
         </dependency>  
 </dependencies>  


Developing the Java Producer

1. Make Sure you know the Topic Name
2. Find out Your Bootstrap - Server Port
3. Refer Javadoc for KafkaProducer Obj. 
4. Also, for Producer & ProducerRecord


Code for Java Producer (Tested on Kafka v3.4 on Windows 10)
 /**   
  *    Author @sumith.puri (Addl. Ref: https://www.sohamkamani.com/java/kafka/)  
  */   
 package com.kafka.poc.producer;  
   
 import java.util.Properties;  
   
 import org.apache.kafka.clients.producer.KafkaProducer;  
 import org.apache.kafka.clients.producer.Producer;  
 import org.apache.kafka.clients.producer.ProducerRecord;  
   
 public class KafkaPoCProducer implements Runnable {  
   
     private static final String TOPIC = "test";  
     private static final String BOOTSTRAP_SERVERS = "localhost:9092";  
   
     @Override  
     public void run() {  
   
         produce();  
     }  
   
     private void produce() {  
   
         // Create Configuration Options for our Producer and Initialize a New Producer  
         Properties props = new Properties();  
         props.put("bootstrap.servers", BOOTSTRAP_SERVERS);  
   
         // We Configure the Serializer to Describe the Format in which we Want To  
         // Produce Data into our Kafka Cluster  
         props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");  
         props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");  
   
         // Since we Need to Close our Producer, We can use the try-with-resources  
         // Statement to create a New Producer  
         try (Producer<String, String> producer = new KafkaProducer<>(props)) {  
   
             // Here, We Run an Infinite Loop to Send a Message to the Cluster Every Second  
             for (int i = 0;; i++) {  
                 String key = Integer.toString(i);  
                 String message = "Watson, Please Come Over Here " + Integer.toString(i);  
   
                 producer.send(new ProducerRecord<String, String>(TOPIC, key, message));  
   
                 // Log a Confirmation Once The Message is Written  
                 System.out.println("Sent Message " + key);  
                 try {  
                     // Sleep for a Second  
                     Thread.sleep(1000);  
                 } catch (Exception e) {  
                     break;  
                 }  
             }  
         } catch (Exception e) {  
             System.out.println("Could not Start Producer Due To: " + e);  
         }  
     }  
 }  
   

Developing the Java Consumer

1. Make Sure you know the Topic Name
2. Find out Your Bootstrap - Server Port
3. Refer Javadoc > KafkaConsumer Obj.
4. Also, for Consumer, ConsumerRecord
5. Refer AutoCommit/Acknowledgement


Code for Java Consumer (Tested on Kafka v3.4 on Windows 10)
 /**   
  *    Author @sumith.puri (Addl. Ref: https://www.sohamkamani.com/java/kafka/)  
  */   
 package com.kafka.poc.consumer;  
   
 import java.time.Duration;  
 import java.util.Arrays;  
 import java.util.Properties;  
   
 import org.apache.kafka.clients.consumer.ConsumerRecord;  
 import org.apache.kafka.clients.consumer.ConsumerRecords;  
 import org.apache.kafka.clients.consumer.KafkaConsumer;  
   
 public class KafkaPoCConsumer implements Runnable {  
   
     private static final String TOPIC = "test";  
     private static final String BOOTSTRAP_SERVERS = "localhost:9092";  
   
     @Override  
     public void run() {  
   
         consume();  
     }  
   
     private void consume() {  
   
         // Create Configuration Options for our Consumer  
         Properties props = new Properties();  
   
         props.setProperty("bootstrap.servers", BOOTSTRAP_SERVERS);  
          // The Group ID is a Unique Identifier for Each Consumer Group  
         props.setProperty("group.id", "my-group-id");  
   
         // Since our Producer uses a String Serializer, We need to use the Corresponding  
         // Deserializer  
         props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");  
         props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");  
   
         // Every Time We Consume a Message from kafka, We Need to "commit", That Is,  
         // Acknowledge Receipts of the Messages.... We Can Set up an Auto-Commit at  
         // Regular intervals, so that this is Taken Care of in the Background  
         props.setProperty("enable.auto.commit", "true");  
         props.setProperty("auto.commit.interval.ms", "1000");  
   
         // Since We Need to Close our Consumer, We can Use the try-with-resources  
         // Statement to Create It  
         try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {  
   
             // Subscribe this Consumer to the Same Topic that we Wrote Messages to Earlier  
             consumer.subscribe(Arrays.asList(TOPIC));  
   
             // Run an Infinite Loop where we Consume and Print New Messages to the Topic  
             while (true) {  
                   
                  // The consumer.poll Method Checks and Waits for Any New Messages to Arrive  
                  // for the Subscribed Topic; If there are No Messages for the Duration  
                  // Specified in the Argument (1000 ms in this Case), It Returns an Empty List  
                 ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));  
                 for (ConsumerRecord<String, String> record : records) {  
                     System.out.printf("Received Message: %s\n", record.value());  
                 }  
             }  
         }  
     }  
   
 }  
   


Create the Java Application to Demo Kafka Producer & Consumer
 /**   
  *    Author @sumith.puri (Addl. Ref: https://www.sohamkamani.com/java/kafka/)  
  */   
 package com.kafka.poc.app;  
   
 import com.kafka.poc.consumer.KafkaPoCConsumer;  
 import com.kafka.poc.producer.KafkaPoCProducer;  
   
 public class KafkaPoCApp {  
   
     public static void main(String[] args) {  
   
         Thread cThread = new Thread(new KafkaPoCConsumer());  
         cThread.start();  
   
         Thread pThread = new Thread(new KafkaPoCProducer());  
         pThread.start();  
   
     }  
 }  
   

Run the Above Application in your IDE or Command-Line.


Typical Output from Running the Kafka Producer Consumer PoC
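Typical interleaved console output from the two threads looks like the following (the exact interleaving varies with thread scheduling; the messages come straight from the code above):

 Sent Message 0
 Received Message: Watson, Please Come Over Here 0
 Sent Message 1
 Received Message: Watson, Please Come Over Here 1
 ...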



Tuesday, July 4, 2023

Starting Apache Kafka on Windows 10 (Kafka v3.4)

Introduction

This is Part-1 of a 2-part article series on running the Apache Kafka server, configuring a Kafka topic, and creating a Kafka consumer as well as a Kafka producer. All of this is demonstrated through a step-by-step example that works from the command line, using Apache Kafka v3.4 on Windows 10.


Pre-Requisites
1. Install Java/JRE (v8.0 is used in this Example)
2. Install Apache Kafka 3.4.0 from the Given Link
3. Set Java Classpath and Set JAVA_HOME Correctly
4. UnZIP/UnTAR Apache Kafka Downloaded in (2)
5. Use a Text Editor like [Notepad++] for Editing



Version 3.4.0
Apache Kafka Version 3.4.0 was released on Feb 7, 2023. This article is specifically for the Kafka build 2.13-3.4.0 (Scala 2.13 with Kafka 3.4.0).

For purposes of this article, I use {KAFKA_HOME} to refer to the Windows folder where Kafka is installed.


Step-By-Step Guide

0. Configure Zookeeper (Data Directory)
Create a folder to hold the Zookeeper data, and point to it by modifying the file zookeeper.properties (located under {KAFKA_HOME}/config/). Create a folder named zk-data (or any name you wish); in my case, I created it under {KAFKA_HOME}. Then modify the dataDir property in zookeeper.properties to point to the newly created folder, as shown below.
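For example, assuming Kafka is installed at D:\kafka_2.13-3.4.0 (adjust the path to your own folder):

 # {KAFKA_HOME}/config/zookeeper.properties
 dataDir=D:/kafka_2.13-3.4.0/zk-data
 clientPort=2181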



0. Configure Kafka (Kafka Logs)
For the Kafka logs, create a folder with the name kafka-logs; in my case, I created it under {KAFKA_HOME}. Then modify the log.dirs property in server.properties (also under {KAFKA_HOME}/config/) so that it points to the newly created folder, as shown below.
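For example, again assuming Kafka is installed at D:\kafka_2.13-3.4.0:

 # {KAFKA_HOME}/config/server.properties
 log.dirs=D:/kafka_2.13-3.4.0/kafka-logs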



1. Starting Zookeeper
First, Zookeeper has to be started using the following command.
D:\kafka_2.13-3.4.0\bin\windows>zookeeper-server-start.bat ..\..\config\zookeeper.properties



2. Starting Kafka Server
Next, we will start the Kafka Server using the following command.

D:\kafka_2.13-3.4.0\bin\windows>kafka-server-start.bat ..\..\config\server.properties



3. Creating a Test Topic
Create a Kafka Topic to test out the Kafka Installation using the following command.

D:\kafka_2.13-3.4.0\bin\windows>kafka-topics.bat --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test



The above is the updated way to create topics in Kafka. In earlier versions (Kafka v2), the suggested way was to create topics directly via Zookeeper. From v3, topics are created via the brokers instead; the older form of the command is shown for reference after the citation below.

(Cited from StackOverflow)

For version 2.*, you have to create the topic using Zookeeper, with its default port 2181 as a parameter.

For version 3.*, Zookeeper is no longer a parameter; you should use --bootstrap-server with localhost or the IP address of the server and the default port 9092.

Documentation
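
For reference, the older (Kafka v2) style of the same command looked like this; do not use it with v3:

 D:\kafka_2.13-3.4.0\bin\windows>kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test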

 

4. Create Kafka Producer
kafka-console-producer.bat --broker-list localhost:9092 --topic test

(Note: --broker-list is deprecated in Kafka v3; --bootstrap-server localhost:9092 is the preferred equivalent and can be used here as well.)



5. Create Kafka Consumer

kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning


------

Next in this series of articles will be a demonstration of a core Java Kafka producer and consumer, followed by an article on Spring Boot based Kafka integration.

Friday, January 27, 2023

My GitHub Repo #25 : Tokyo

Code Samples for [Algos & DS, OOPs, Lambdas]
MIT License, Copyright (c) 2018-19, Sumith Kumar Puri
https://github.com/sumithpuri


[Java] Problem : Changes in Usernames (HackerRank)
[Java] Problem : Active Traders (HackerRank)
[Java] [ FP ] : Functional Programming & Lambdas (TechGig)
[Java] [OOPs] : Object Oriented Programming (TechGig)
[Java] Problem : Monkeys in the Garden (TechGig)
[Java] Problem : AutoComplete Using Trie Data Structure
[Java] Problem : Longest Palindrome in String









Project Codename : Tokyo
Blog Post URL    : http://www.techilashots.blog/2015/09/introduction-to-complex-event.html
Blog Short URL   : 
Package Prefix   : me.sumithpuri.github.tokyo
GitHub URL       : https://github.com/sumithpuri/skp-winter-code-nights-tokyo
Contact E-Mail   : code@sumithpuri.xyz
Contact Number   : +91 9591497974 (WhatsApp, Viber, Telegram)

Historical

 Started this Movement of 1000s of Lines of Java / J2EE* Code to GitHub

 Was a Senior Software Architect (Java/J2EE) in Manila*, 2018 (At Start) 

 Named this Initial Code Journey as [ Manila Code Marathon - 2018 ]

 Code Is Non-Proprietary and Non-Copyright from my Work Experience.

 Was Back to Bangalore, Named as [ Bangalore Code Nights - 2019 ]

 Added More Code under [ 20 Days of Code in Bengaluru ] in 2020

 Celebration of Java/Java EE Code as Java Turned 25 in the Year 2020!

  

My GitHub Repo #24 : Tripura

Code Samples for [Spring - ORM]
MIT License, Copyright (c) 2018-19, Sumith Kumar Puri
https://github.com/sumithpuri


[Completed Brainbench Spring 2.x Certification - India Top 10 - 3.08/5]
[Modified and New Samples Based out of the Book - 'Spring In Action' ]









Project Codename    : Tripura
Certificate URL     : https://rebrand.ly/skp-bb-spring-certificate
Certification/Topic : Brainbench Spring 2.5 Certification (ORM)
Package Prefix      : me.sumithpuri.github.tripura
GitHub URL          : https://github.com/sumithpuri/skp-mini-marathon-tripura
Contact E-Mail      : code@sumithpuri.xyz
Contact Number      : +91 9591497974 (WhatsApp, Viber, Telegram)

Historical

 Started this Movement of 1000s of Lines of Java / J2EE* Code to GitHub

 Was a Senior Software Architect (Java/J2EE) in Manila*, 2018 (At Start) 

 Named this Initial Code Journey as [ Manila Code Marathon - 2018 ]

 Code Is Non-Proprietary and Non-Copyright from my Work Experience.

 Was Back to Bangalore, Named as [ Bangalore Code Nights - 2019 ]

 Added More Code under [ 20 Days of Code in Bengaluru ] in 2020

 Celebration of Java/Java EE Code as Java Turned 25 in the Year 2020!

  

My GitHub Repo #23 : Sikkim

Code Samples for [Spring - Web Services]
MIT License, Copyright (c) 2018-19, Sumith Kumar Puri
https://github.com/sumithpuri


[Completed Brainbench Spring 2.x Certification - India Top 10 - 3.08/5]
[Modified and New Samples Based out of the Book - 'Spring In Action' ]









Project Codename    : Sikkim
Certificate URL     : https://rebrand.ly/skp-bb-spring-certificate
Certification/Topic : Brainbench Spring 2.5 Certification (Web Services)
Package Prefix      : me.sumithpuri.github.sikkim
GitHub URL          : https://github.com/sumithpuri/skp-mini-marathon-sikkim
Contact E-Mail      : code@sumithpuri.xyz
Contact Number      : +91 9591497974 (WhatsApp, Viber, Telegram)

Historical

 Started this Movement of 1000s of Lines of Java / J2EE* Code to GitHub

 Was a Senior Software Architect (Java/J2EE) in Manila*, 2018 (At Start) 

 Named this Initial Code Journey as [ Manila Code Marathon - 2018 ]

 Code Is Non-Proprietary and Non-Copyright from my Work Experience.

 Was Back to Bangalore, Named as [ Bangalore Code Nights - 2019 ]

 Added More Code under [ 20 Days of Code in Bengaluru ] in 2020

 Celebration of Java/Java EE Code as Java Turned 25 in the Year 2020!