Preface
RPC is an excellent distributed communication framework with very wide application. For learning purposes, this article uses C++ to implement real-time retrieval of a remote server's performance parameters over RPC. It covers two parts: the server side (introducing brpc, setting up Docker) and the client side (adapting libevent to the baidu_std protocol, plus a Qt UI). The overall structure is as follows:
I. Server Side
1. brpc
Common RPC frameworks include thrift, grpc, and brpc. After researching online, I chose brpc to study (深入brpc - 知乎 (zhihu.com)), mainly because: 1. brpc is friendlier to C/C++; 2. its documentation is detailed; 3. its performance is very high. Beyond that, many of brpc's internal structures, such as bthread and work stealing, are well worth learning from.
a) Building brpc
Building brpc (github.com/apache/brpc) can follow github.com/apache/brpc… . A few notes: 1. brpc's dependencies gflags, protobuf, leveldb, etc. are best built as shared libraries (specify -fPIC, or -DCMAKE_POSITION_INDEPENDENT_CODE=ON), otherwise the brpc build will complain that the library files cannot be found; for make install it is best to keep the default path /usr/lib or /usr/local/lib, so that brpc finds its dependencies without any path changes. 2. brpc is based on C++11, and protobuf should ideally be version 3.6.1; a protobuf version that is too new will make the brpc build fail.
b) Defining the .proto file
The machine's runtime parameters include the number of CPU cores, disk size, memory size, idle CPU, free memory, the top CPU-consuming processes, and so on. They are divided here into intrinsic parameters and dynamic parameters, implemented as two services:
syntax = "proto2";
option cc_generic_services = true;

message SystemDynamicParameterTopOf {
    required string procedure_name = 1;
    required double procedure_percent = 2;
};

message SystemIntrinsicParameterRequest {
};

message SystemIntrinsicParameterResponse {
    required uint32 cpuCores = 1;
    required string diskInventory = 2;
    required string totalPhysicalMemory = 3;
};

service SystemIntrinsicParameterService {
    rpc SystemIntrinsicParameter(SystemIntrinsicParameterRequest) returns (SystemIntrinsicParameterResponse);
};

message SystemDynamicParameterRequest {
    required uint32 topOfCount = 1;
};

message SystemDynamicParameterResponse {
    required double freeCpuPercent = 1;
    required double freeDiskPercent = 2;
    required double freeMemoryPercent = 3;
    repeated SystemDynamicParameterTopOf topOfCpu = 4;
    repeated SystemDynamicParameterTopOf topOfMemory = 5;
};

service SystemDynamicParameterService {
    rpc SystemDynamicParameter(SystemDynamicParameterRequest) returns (SystemDynamicParameterResponse);
};
For the concrete rpc server implementation, refer directly to brpc/example/echo_c++/server.cpp at master · apache/brpc (github.com). Call AddService once for every service you have, as shown in the sketch below.
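A minimal sketch of the server side, assuming the two implementation classes (hypothetical names, as is the generated header's file name) override the methods generated from the .proto above:

#include <brpc/server.h>
#include "system_parameter.pb.h" // assumed name of the protoc-generated header

// Hypothetical implementation of the dynamic-parameter service; the
// intrinsic-parameter service follows the same pattern.
class SystemDynamicParameterServiceImpl : public SystemDynamicParameterService {
public:
    void SystemDynamicParameter(google::protobuf::RpcController *cntl,
                                const SystemDynamicParameterRequest *request,
                                SystemDynamicParameterResponse *response,
                                google::protobuf::Closure *done) override {
        brpc::ClosureGuard done_guard(done); // runs done->Run() on scope exit
        // ... fill response via the popen() helpers shown below ...
    }
};

int main(int argc, char *argv[]) {
    brpc::Server server;
    SystemIntrinsicParameterServiceImpl intrinsic_service;
    SystemDynamicParameterServiceImpl dynamic_service;
    // One AddService call per service.
    if (server.AddService(&intrinsic_service, brpc::SERVER_DOESNT_OWN_SERVICE) != 0 ||
        server.AddService(&dynamic_service, brpc::SERVER_DOESNT_OWN_SERVICE) != 0) {
        LOG(ERROR) << "Fail to add service";
        return -1;
    }
    brpc::ServerOptions options;
    options.idle_timeout_sec = 60;
    if (server.Start(4000, &options) != 0) {
        LOG(ERROR) << "Fail to start server";
        return -1;
    }
    server.RunUntilAskedToQuit();
    return 0;
}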
c) Obtaining machine runtime parameters with C++
Shell snippets can be executed through the popen function (FILE *popen(const char *command, const char *modes)) to obtain the machine's runtime state, for example:
#define GET_CPU_CORES "grep -c 'model name' /proc/cpuinfo"
#define GET_DISK_INVENTORY "df -h | grep '/dev/vda1' | awk '{print $2}'"
#define GET_TOTAL_PHYSICAL_MEMORY "free -h | grep Mem | awk '{print $2}'"
#define GET_FREE_CPU_PERCENT "top -b -n 1 | grep Cpu | awk '{print $8}' | cut -f 1 -d '%'"
#define GET_FREE_DISK_INVENTORY "df -h | grep '/dev/vda1' | awk '{print $4}'"
#define GET_USED_PHYSICAL_MEMORY "free -h | grep Mem | awk '{print $3}'"
// Note: the returned c_str() is only valid until the end of the full
// expression, so these two macros must be consumed immediately (e.g. passed
// straight to popen()).
#define GET_TOP_OF_CPU(index) \
    (std::string("top -b -n 1 | sed -n '8,50p' | sort -r -k 9 | head -n ") + std::to_string(index) + std::string(" | awk '{print $9, $12}'")).c_str()
#define GET_TOP_OF_MEMORY(index) \
    (std::string("top -b -n 1 | sed -n '8,50p' | sort -r -k 10 | head -n ") + std::to_string(index) + std::string(" | awk '{print $10, $12}'")).c_str()
//...
FILE *fstream = NULL;
char buff[1024] = {0};
if (NULL == (fstream = popen(GET_FREE_CPU_PERCENT, "r")))
{
    LOG_ERROR("get free cpu percent failed: %s", strerror(errno));
    return false;
}
fgets(buff, sizeof(buff), fstream);
// strip the trailing newline left by fgets()
for (int i = 0; buff[i] != '\0'; i++)
    if ('\n' == buff[i])
        buff[i] = '\0';
response->set_freecpupercent(strtod(buff, NULL));
pclose(fstream);
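Since every parameter repeats the same popen/fgets/pclose boilerplate, it can be wrapped in a small helper. A sketch (the name execCommand is mine, not from the original code):

#include <cstdio>
#include <string>

// Run a shell command and return its first line of output (empty on error).
static std::string execCommand(const char *cmd)
{
    std::string result;
    FILE *fstream = popen(cmd, "r");
    if (fstream == NULL)
        return result;
    char buff[1024] = {0};
    if (fgets(buff, sizeof(buff), fstream) != NULL)
        result = buff;
    pclose(fstream);
    return result;
}

// Usage: response->set_freecpupercent(strtod(execCommand(GET_FREE_CPU_PERCENT).c_str(), NULL));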
2. Docker setup
Normally the server-side process runs inside a Docker container, for automated deployment and isolation from the outside. 1. First, run make install so that all the dependent shared libraries are collected into one directory. 2. Write a Dockerfile like the one below. In my environment the libraries had to be copied into a system directory (/lib64, /lib, /usr/lib64, etc.); I originally wanted to copy them into a newly created docker_brpc_server directory, but the program kept failing at runtime with missing-library errors. The most likely reason is that the dynamic linker only searches the standard paths (and directories registered via ldconfig) unless LD_LIBRARY_PATH is set. 3. Run the container (docker run -d -p 4000:4000 brpc_server:latest); at this point no missing-library errors are reported.
FROM centos:latest
LABEL org.opencontainers.image.authors="_zq_yy_lf_"
# RUN mkdir -p /docker_brpc_server
# COPY . /docker_brpc_server
COPY . /lib64
# WORKDIR /docker_brpc_server
WORKDIR /lib64
EXPOSE 4000
CMD ["./rpc_server", "--port=4000", "--idle_timeout_s=60"]
II. Client Side
1. RPC client implementation
a) TCP client implementation
Although brpc has many strengths, it currently does not support Windows (arguably one of its drawbacks), so a baidu_std-compatible rpc client has to be implemented by hand; libevent is used here. protobuf also has an awkward property: newer protobuf versions are not compatible with older ones. As mentioned earlier, protobuf 3.6.1 is needed to match brpc; since the Windows client has no brpc, I assumed a newer protobuf would be fine there, but in practice it is not. Also, building protobuf 3.6.1 with VS2022 on Windows only succeeds if the preprocessor definition _SILENCE_STDEXT_HASH_DEPRECATION_WARNINGS is added.
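As a side note, such _SILENCE macros must be visible before the standard-library headers they affect; a minimal illustration (adding it to the project-wide preprocessor definitions, or passing /D on the compiler command line, achieves the same):

// MSVC: silence the hard error for the deprecated stdext::hash_* containers
// that protobuf 3.6.1's headers still reference. Must be defined before any
// standard-library or protobuf include.
#define _SILENCE_STDEXT_HASH_DEPRECATION_WARNINGS
#include <google/protobuf/message.h>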
libevent is an event-driven asynchronous network library; even a pure client must call event_base_dispatch to run the event loop, and this is usually done on a dedicated thread (see the dispatch-thread sketch after the TcpClient code below). After the bufferevent is created, a local socket pair is created with evutil_socketpair, so that, via callbacks, the requests issued by the upper-layer rpcChannel object are executed on the event_base_dispatch thread. The concrete TcpClient implementation is as follows:
bufferevent_setcb(m_bev, TcpClient::readCallBack, TcpClient::writeCallBack, TcpClient::event_cb, nullptr);
evutil_socketpair(AF_INET, SOCK_STREAM, 0, m_fdPair);
// Forward data written on m_fdPair[0] (from any thread) to the bufferevent
// inside the event loop thread.
event *m_ev = event_new(m_base, m_fdPair[1], EV_READ | EV_PERSIST, TcpClient::onMessage, (void *)m_bev);
event_add(m_ev, nullptr);
bool TcpClient::sendTcpMessage(char *totalBuf, unsigned long len)
{
    // Called from the caller's thread; the loop thread picks the data up
    // in onMessage() via the socket pair.
    int sendLen = ::send(m_fdPair[0], totalBuf, len, 0);
    return sendLen == static_cast<int>(len);
}
void TcpClient::onMessage(evutil_socket_t fd, short what, void *arg)
{
    char buf[1024] = {0};
    int len = ::recv(fd, buf, sizeof(buf), 0);
    if (len <= 0)
        return;
    struct bufferevent *bev = (struct bufferevent *)arg;
    bufferevent_write(bev, buf, len); // relay to the real TCP connection
}
void TcpClient::readCallBack(struct bufferevent *bev, void *arg)
{
    char buf[4096] = {0};
    size_t readLen = bufferevent_read(bev, buf, sizeof(buf));
    if (readLen && gReadCallBack)
        gReadCallBack(buf, readLen);
}
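As mentioned above, the dispatch loop runs on a dedicated thread. A minimal sketch, assuming m_base is the TcpClient's event_base and m_loopThread is a std::thread member (these names and the startLoop method are assumptions, not from the original code):

#include <thread>
#include <event2/event.h>

void TcpClient::startLoop()
{
    // event_base_dispatch() blocks until event_base_loopbreak()/loopexit()
    // is called or no events remain, so it gets its own thread; all libevent
    // callbacks (onMessage, readCallBack, ...) then run on this thread.
    m_loopThread = std::thread([this]() {
        event_base_dispatch(m_base);
    });
}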
b) RPC client implementation
The object relationships in a single rpc client call are shown in the figure below; the logic of CallMethod follows. RpcClientController inherits from google::protobuf::RpcController and is used to set error codes. The request and response objects are generated by the pb tooling and both inherit from google::protobuf::Message.
void RpcClientChannel::CallMethod(const google::protobuf::MethodDescriptor *method,
                                  google::protobuf::RpcController *controller,
                                  const google::protobuf::Message *request,
                                  google::protobuf::Message *response,
                                  google::protobuf::Closure *done)
{
    std::shared_ptr<std::vector<char>> totalBufPtr = nullptr;
    TcpClientCallBack::constructMessage(method, request, totalBufPtr);
    if (!m_tcpClientPtr->sendTcpMessage(totalBufPtr->data(), totalBufPtr->size())){
        controller->SetFailed("sendTcpMessage error");
        return;
    }
    std::unique_lock<std::mutex> ulk(gMx);
    // Block until readCallBack has filled gBuf/gLen, or the timeout expires.
    gCv.wait_for(ulk, std::chrono::seconds(m_timeOut), [](){ return gBuf[0] != '\0' && gLen != 0; });
    if ('\0' == gBuf[0] || 0 == gLen){
        controller->SetFailed("sendTcpMessage timeout");
        return;
    }
    TcpClientCallBack::parseMessage(controller, response);
    resetBuf();
}
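With the channel in place, a call looks like an ordinary protobuf stub call. A minimal usage sketch (the channel/controller construction details and the generated header's file name are assumptions, since only CallMethod is shown above; the stub class is what protoc generates for the service):

#include "system_parameter.pb.h" // assumed generated header name

void queryDynamicParameters()
{
    RpcClientChannel channel;            // construction details (server address, TcpClient) assumed
    RpcClientController controller;      // the RpcController subclass mentioned above
    SystemDynamicParameterService_Stub stub(&channel); // generated by protoc

    SystemDynamicParameterRequest request;
    request.set_topofcount(5);           // ask for the top-5 processes
    SystemDynamicParameterResponse response;

    // Synchronous call: CallMethod blocks on the condition variable until
    // the reply arrives or m_timeOut expires.
    stub.SystemDynamicParameter(&controller, &request, &response, nullptr);
    if (controller.Failed()) {
        // inspect controller.ErrorText()
        return;
    }
    double freeCpu = response.freecpupercent();
    (void)freeCpu; // hand off to the Qt UI, etc.
}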
c) baidu_std protocol conversion
The baidu_std protocol is documented at brpc/docs/cn/baidu_std.md at master · apache/brpc (github.com). The conversion is shown below. Note: do not try to append an extra '\0' after the outgoing data frame; doing so makes all subsequent data fail to parse. The conversion uses structures such as brpc::policy::RpcMeta and brpc::policy::RpcRequestMeta, which are generated when brpc is built.
void TcpClientCallBack::constructMessage(const google::protobuf::MethodDescriptor *method,
                                         const google::protobuf::Message *request,
                                         std::shared_ptr<std::vector<char>> &totalBufPtr)
{
    std::unique_ptr<brpc::policy::RpcMeta> rpcMetaPtr(new brpc::policy::RpcMeta());
    brpc::policy::RpcRequestMeta *pRpcRequestMeta = new brpc::policy::RpcRequestMeta();
    pRpcRequestMeta->set_service_name(method->service()->name());
    pRpcRequestMeta->set_method_name(method->name());
    rpcMetaPtr->set_allocated_request(pRpcRequestMeta); // RpcMeta frees RpcRequestMeta in its destructor
    std::string rpcMetaStr = rpcMetaPtr->SerializeAsString();
    std::string sendbuf = request->SerializeAsString();
    // body_size and meta_size are 4-byte big-endian integers.
    unsigned long packBodyLen = rpcMetaStr.size() + sendbuf.size();
    char packBodyLenStr[4] = {0};
    packBodyLenStr[3] = packBodyLen & 0xFF;
    packBodyLenStr[2] = (packBodyLen >> 8) & 0xFF;
    packBodyLenStr[1] = (packBodyLen >> 16) & 0xFF;
    packBodyLenStr[0] = (packBodyLen >> 24) & 0xFF;
    unsigned long packBodyMetaLen = rpcMetaStr.size();
    char packBodyLenMetaStr[4] = {0};
    packBodyLenMetaStr[3] = packBodyMetaLen & 0xFF;
    packBodyLenMetaStr[2] = (packBodyMetaLen >> 8) & 0xFF;
    packBodyLenMetaStr[1] = (packBodyMetaLen >> 16) & 0xFF;
    packBodyLenMetaStr[0] = (packBodyMetaLen >> 24) & 0xFF;
    totalBufPtr.reset(new std::vector<char>(12 + packBodyLen, '\0'));
    memcpy(totalBufPtr->data(), "PRPC", 4);
    memcpy(totalBufPtr->data() + 4, packBodyLenStr, 4);
    memcpy(totalBufPtr->data() + 8, packBodyLenMetaStr, 4);
    memcpy(totalBufPtr->data() + 12, rpcMetaStr.c_str(), rpcMetaStr.size());
    memcpy(totalBufPtr->data() + 12 + rpcMetaStr.size(), sendbuf.c_str(), sendbuf.size());
}
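// Resulting frame layout (lengths are 4-byte big-endian, per the baidu_std doc):
//   bytes 0..3    magic "PRPC"
//   bytes 4..7    body_size = meta_size + payload size   (packBodyLen above)
//   bytes 8..11   meta_size                              (packBodyMetaLen above)
//   bytes 12..    serialized RpcMeta, immediately followed by the serialized request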
void TcpClientCallBack::parseMessage(google::protobuf::RpcController *controller,
                                     google::protobuf::Message *response)
{
    if (gLen < 12){
        controller->SetFailed("total size < 12");
        return;
    }
    if (0 != strncmp(gBuf, "PRPC", 4)){
        controller->SetFailed("protocol is not PRPC");
        return;
    }
    // Reassemble the big-endian body_size from bytes 4..7 of the header.
    int packBodyLen = 0;
    packBodyLen |= (gBuf[4] & 0xFF) << 24;
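    // --- The post breaks off above. A plausible continuation under the same
    // --- conventions (my sketch, not the author's original code):
    packBodyLen |= (gBuf[5] & 0xFF) << 16;
    packBodyLen |= (gBuf[6] & 0xFF) << 8;
    packBodyLen |= (gBuf[7] & 0xFF);
    int metaLen = 0;
    metaLen |= (gBuf[8] & 0xFF) << 24;
    metaLen |= (gBuf[9] & 0xFF) << 16;
    metaLen |= (gBuf[10] & 0xFF) << 8;
    metaLen |= (gBuf[11] & 0xFF);
    // The serialized response message follows the RpcMeta block.
    const char *payload = gBuf + 12 + metaLen;
    int payloadLen = packBodyLen - metaLen;
    if (payloadLen < 0 || !response->ParseFromArray(payload, payloadLen))
        controller->SetFailed("response ParseFromArray failed");
}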