
rest_rpc: An easy to use RPC framework

First official release

rest_rpc is a project created by the C++ open source community ([purecpp.org](http://purecpp.org)) and developed in modern C++. The first version was released after several iterations and refactorings. rest_rpc is an easy-to-use, flexible, high-performance, cross-platform RPC framework whose features compare well with RPC frameworks developed by large companies.

Features

  • RPC calls look just like local function calls
  • Easy to use: developers only need to focus on business logic
  • Flexible: the serialization method can be freely customized, with built-in support for JSON and msgpack
  • Supports synchronous and asynchronous calls

These features are covered in the article The standards for evaluating an easy-to-use RPC, and rest_rpc complies with those standards fully. There is no doubt that it is an excellent RPC framework with strong prospects for development and growth.

Business logic processing with a traditional network library is generally divided into the following five steps:

  1. Receive network data;
  2. Unpack network data;
  3. Call business logic;
  4. Pack result;
  5. Send data.

There's only one step if you use rest_rpc.

  1. Call business logic (rest_rpc will do other tasks for you).

rest_rpc provides a one-stop service: the other four steps are handled by the framework, so developers only need to focus on business logic, which cuts out the hassle.
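To make the comparison concrete, below is a minimal, self-contained sketch of what a developer has to hand-write without such a framework. An in-memory buffer stands in for the socket and a deliberately naive raw-memcpy format stands in for a real codec; none of this is rest_rpc API.

	#include <cstring>
	#include <iostream>
	#include <vector>

	// Step 3: the business logic -- the only part rest_rpc asks you to write.
	int add(int a, int b) { return a + b; }

	// Steps 1, 2, 4 and 5, written by hand; the wire buffer stands in for a socket.
	std::vector<char> handle_request(const std::vector<char>& wire)     // 1. receive network data
	{
		int a = 0, b = 0;
		std::memcpy(&a, wire.data(), sizeof a);                      // 2. unpack network data
		std::memcpy(&b, wire.data() + sizeof a, sizeof b);
		int result = add(a, b);                                      // 3. call business logic
		std::vector<char> reply(sizeof result);                      // 4. pack result
		std::memcpy(reply.data(), &result, sizeof result);
		return reply;                                                // 5. send data (handed back to the I/O layer)
	}

	int main()
	{
		// Simulate a request for add(1, 2) arriving on the wire.
		int a = 1, b = 2;
		std::vector<char> wire(sizeof a + sizeof b);
		std::memcpy(wire.data(), &a, sizeof a);
		std::memcpy(wire.data() + sizeof a, &b, sizeof b);

		auto reply = handle_request(wire);
		int result = 0;
		std::memcpy(&result, reply.data(), sizeof result);
		std::cout << result << std::endl;                            // prints 3
	}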

The most important feature of rest_rpc is ease of use. Developers can invoke an RPC service interface just like a local call and focus on their own business logic, without understanding the details of the framework or the network. rest_rpc is also very flexible: developers can choose the serialization method, with support for custom serialization.

Usage

Let's look at a simple example to demonstrate how to use rest_rpc. The server provides an RPC service interface, int add(int a, int b), and the client gets the result via RPC.

  • Server side code:

      #include <rest_rpc/server.hpp>
      using namespace timax::rpc;
    
      int add(int a, int b)
      {
      	return a + b;
      }
    
      int main()
      {
      	// Select serialization: json, msgpack, or other customized methods are supported
      	using codec_type = msgpack_codec;
    
      	// create server; port and thread_num below are example values -- pick your own
      	const unsigned short port = 9000;
      	const size_t thread_num = 4;
      	auto sp = std::make_shared<server<codec_type>>(port, thread_num);
      	
      	// Register the business logic processing functions. A function can be one of a
      	// normal function, a function object, a lambda, a std::function, or a class
      	// member function. You can use anything, no restriction.
      	sp->register_handler("add", add); 
    
      	sp->start();
      	std::getchar();		// replace with your server main loop
      	sp->stop();
      }		
    
  • Synchronous client side code

      #include <rest_rpc/client.hpp>
      using namespace timax::rpc;
    
      // Define the RPC protocol; the call signature is checked at compile time
      TIMAX_DEFINE_PROTOCOL(add, int(int, int));
    
      int main()
      {
      	// Define synchronous client and msgpack serialization
      	sync_client<msgpack_codec> client; 
      	
      	// The client only needs to set the endpoint;
      	// there is no need to create a connection manually
      	auto endpoint = get_tcp_endpoint("127.0.0.1", 9000);
    
      	// RPC calls are type safe: the call function checks the parameter types
      	// at compile time and supports safe implicit conversion of arguments
      	auto result = client.call(endpoint, add, 1, 2); 
      	assert(result == 3);
    
      	// Safe implicit type conversion: the floating-point arguments are converted to int
      	result = client.call(endpoint, add, 1.0, 2.0f);
      	assert(result == 3);
      }
    

That completes the RPC program. The code for both server and client is just a few lines, no more than ten each. Developers only need to focus on business logic, without concerning themselves with network or framework details. The framework is very easy to use and imposes no restrictions.

  • Asynchronous client side code: see the dedicated Asynchronous client section below.

A more complex example

This example demonstrates an RPC interface that contains binary data. Some RPC frameworks need to convert binary data, for example to BASE64. rest_rpc supports raw binary data without any conversion.

  • Server side code

      #include <rest_rpc/server.hpp>
      using namespace timax::rpc;
      
      struct test
      {
      	void compose(int i, const std::string& str, blob_t bl, double d)
      	{
      		std::cout << i << " " << str << " " << bl.data() << " " << d << std::endl;
      	}
      };
    
      int main()
      {
      	// Select serialization, support json, msgpack
      	// users can create custom serializations
      	using codec_type = msgpack_codec;
    
      	// create server; port and thread_num below are example values -- pick your own
      	const unsigned short port = 9000;
      	const size_t thread_num = 4;
      	auto sp = std::make_shared<server<codec_type>>(port, thread_num);
    
      	test t;
    
      	// use timax::bind to bind a class member function
      	sp->register_handler("compose", timax::bind(&test::compose, &t)); 
    
      	sp->start();
      	// ...
      }	
    
  • Client side code

      #include <rest_rpc/client.hpp>
      using namespace timax::rpc;
    
      // Define the RPC protocol; the call signature is checked at compile time
      TIMAX_DEFINE_PROTOCOL(compose, void(int, const std::string&, blob_t, double));
    
      int main()
      {
      	// Define synchronous client type and msgpack serialization
      	sync_client<msgpack_codec> client;
      	
      	auto endpoint = get_tcp_endpoint("127.0.0.1", 9000);
    
      	// so easy to use, rest_rpc does most of the work for you
      	client.call(endpoint, compose, 1, "test", blob_t("data", 4), 2.5);
      }
    

Compile

rest_rpc requires a C++14 compiler, such as VS2015 on Windows or GCC 5.0+ on Linux, and Boost 1.55 or later.
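As an illustration, a server translation unit could be built on Linux with something like the following command; the include and library paths are placeholders for wherever rest_rpc and Boost live on your system, and the exact set of Boost libraries to link may vary.

	# Example build line (paths and linked libraries are assumptions; adjust to your setup)
	g++ -std=c++14 server.cpp -o server \
	    -I/path/to/rest_rpc -I/path/to/boost/include \
	    -L/path/to/boost/lib -lboost_system -pthread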

Notice

The client needs to handle exceptions, because an RPC call may fail. There are several possibilities: the network connection between server and client is broken, the server doesn't provide the requested RPC service, or the server throws an internal exception. In short, the rest_rpc framework delivers error codes and error messages as exceptions, so it is best to catch exceptions around the call.

	try
	{
		auto result = client.call(endpoint, client::add, 1, 2);
		assert(result == 3);
	}
	catch (timax::rpc::exception const& e)
	{
		std::cout << e.get_error_message() << std::endl;
	}

In addition, the server executes the business function in the main thread by default. If the user needs to perform a very time-consuming operation, rest_rpc provides an interface to perform business functions asynchronously.

	// .....
	namespace your_project
	{
		void some_task_takes_a_lot_of_time(double, int)
		{
			using namespace std::chrono_literals;
			std::this_thread::sleep_for(5s);
		}
	}
		
	int main()
	{
		// ....

		// Business functions do not block the main thread if they are registered as
		// asynchronous functions
		server->async_register_handler("time_consuming", 
			your_project::some_task_takes_a_lot_of_time);

		// ...
	} 

Asynchronous client

The synchronous client blocks the calling thread, which simplifies the logic but reduces performance. rest_rpc therefore also implements an asynchronous client that is easy to use.

  • Asynchronous client example

      #include <rest_rpc/rpc.hpp>
    
      namespace client
      {
      	TIMAX_DEFINE_PROTOCOL(add, int(int, int));
      }
    
      int main()
      {
      	using namespace std::chrono_literals;
    
      	// Define an asynchronous client type that uses msgpack serialization
      	using async_client_t = timax::rpc::async_client<timax::rpc::msgpack_codec>;
    
      	// server's IP address and port
      	auto endpoint = get_tcp_endpoint("127.0.0.1", 9000);
    
      	// asynchronous client instance
      	async_client_t async_client;
    
      	// Call the RPC; use on_ok to register a success callback, on_error to
      	// register a failure callback, and timeout to set the timeout
      	// (here it is set to 10 seconds)
      	async_client.call(endpoint, client::add, 1, 2).on_ok([](auto r)
      	{
      		std::cout << r << std::endl;
      	}).on_error([](auto const& error)
      	{
      		std::cout << error.get_error_message() << std::endl;
      	}).timeout(10s);
    
      	std::getchar();
      	return 0;
      }
    
  • Asynchronous client's synchronous interface

The asynchronous client has both an asynchronous interface and a synchronous interface. Developers can choose when and where to block.

	#include <rest_rpc/rpc.hpp>

	namespace client
	{
		TIMAX_DEFINE_PROTOCOL(add, int(int, int));
	}

	int main()
	{
		// same code as asynchronous client before
		using namespace std::chrono_literals;
		using async_client_t = timax::rpc::async_client<timax::rpc::msgpack_codec>;
		auto endpoint = get_tcp_endpoint("127.0.0.1", 9000);
		async_client_t async_client;

		// RPC call will return a task, similar to std::future
		auto task = async_client.call(endpoint, client::add, 1, 2);

		// The get function blocks the calling thread until the timeout expires or
		// the result is returned.
		// Do not call get from multiple threads, because get is not thread safe
		try
		{
			auto result = task.get();
			// do something with the result ...
		}
		catch(timax::rpc::exception const& error)
		{
			// An error from the server is thrown to the client as an exception,
			// equivalent to the on_error callback of the purely asynchronous interface
			std::cout << error.get_error_message() << std::endl;
		}
		
		return 0;
	}

Performance Testing

rest_rpc has very high performance. The following results were obtained by calling the add RPC service with the asynchronous client. Because RPC follows a request-response model, this is a ping-pong test that covers the full pipeline: data unpacking, business logic execution, result packing, and sending.

The test was run on a 12-core (2.4 GHz), 24-thread server; CPU utilization was 63% at 460,000 QPS.

Code quality

The picture below was generated by a code quality measurement tool; the code is very readable.

If you only need RPC, then you can stop reading

If you want more, please continue reading.

Is there anything else?

Yes, there are some special features. rest_rpc provides not only RPC, but also more interesting features such as publish-subscribe! That's right, rest_rpc has pub/sub support. Why does an RPC framework provide pub/sub? Because RPC and the pub/sub model are similar: RPC can be treated as a special case of pub/sub. The following is a pub/sub example.

  • Server side code

      #include <rest_rpc/server.hpp>
      using namespace timax::rpc;
      
      int add(int a, int b)
      {
      	return a + b;
      }
    
      int main()
      {
      	// Select serialization, support json, msgpack
      	// users can create custom serializations
      	using codec_type = msgpack_codec;
    
      	// create server; port and thread_num below are example values -- pick your own
      	const unsigned short port = 9000;
      	const size_t thread_num = 4;
      	auto sp = std::make_shared<server<codec_type>>(port, thread_num);
    
      	// the server provides the sub_add topic
      	sp->register_handler("sub_add", &add, [sp](auto conn, auto r) 
      	{
      		sp->pub("sub_add", r);         // broadcast to all subscribers
      	});
    
      	sp->start();
      	// .....
      }
    
  • Pub client side code

Because the pub/sub model is naturally asynchronous, this interface is implemented only for the asynchronous client; the synchronous client does not support it yet.

	#include <rest_rpc/client.hpp>
	using namespace timax::rpc;

	TIMAX_DEFINE_PROTOCOL(sub_add, int(int, int));

	int main()
	{
		// ......


		// define asynchronous client and msgpack serialization
		async_client<msgpack_codec> client;
		auto endpoint = get_tcp_endpoint("127.0.0.1", 9000);

		client.call(endpoint, sub_add, 1, 2); // pub is essentially an RPC call

		// .....
	}
  • Sub client side code

      #include <rest_rpc/client.hpp>
      using namespace timax::rpc;
    
      TIMAX_DEFINE_PROTOCOL(sub_add, int(int, int));
    
      int main()
      {
      	async_client<msgpack_codec> client;
      	auto endpoint = get_tcp_endpoint("127.0.0.1", 9000);
    
      	client.sub(endpoint, sub_add, [](int r)
      	{
      		std::cout << r << std::endl;
      	});
      	// ......
      }
    

Pub/sub mode is that simple. Compared to other RPC frameworks, rest_rpc provides not only an easy-to-use and flexible RPC interface, but also additional pub/sub capabilities. Pub/sub can be combined with RPC calls at any time, which makes rest_rpc even more powerful, as the sketch below shows.
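For instance, a single asynchronous client can hold a subscription and keep issuing ordinary RPC calls at the same time. The sketch below is assembled from the snippets above and assumes the server registers both an add handler and the sub_add topic as in the earlier examples:

	#include <rest_rpc/client.hpp>
	#include <cstdio>
	#include <iostream>
	using namespace timax::rpc;

	TIMAX_DEFINE_PROTOCOL(add, int(int, int));
	TIMAX_DEFINE_PROTOCOL(sub_add, int(int, int));

	int main()
	{
		async_client<msgpack_codec> client;
		auto endpoint = get_tcp_endpoint("127.0.0.1", 9000);

		// Subscribe to the sub_add topic ...
		client.sub(endpoint, sub_add, [](int r)
		{
			std::cout << "published: " << r << std::endl;
		});

		// ... and make an ordinary asynchronous RPC call on the same client.
		client.call(endpoint, add, 1, 2).on_ok([](int r)
		{
			std::cout << "rpc result: " << r << std::endl;
		});

		std::getchar();	// keep the process alive to receive callbacks
	}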