Socket client recv() always returns 0

Date: 2016-04-20 12:22:06

Tags: c++ multithreading sockets proxy fork

I am trying to build an HTTP proxy that gives some connections priority over others, based on the GET / CONNECT hostname in the HTTP request.

The idea is to serve higher-priority requests first, according to a given list of hostnames, each with a certain priority.

Pending connections are stored by an accepter thread in four different queues (one per priority: max, mid, min and unclassified); the accepter then fork()s a child process, which dequeues and handles the pending connections in priority order. This way the accepter thread keeps accepting new connections while every queued connection gets handled. In short, this was my proxy:

  • main: opens a TCP socket, binds it to the given port, listens with a backlog of up to 10 connections, and spawns the accepter thread, passing the socket fd to it;
  • accepter: this thread takes the socket fd passed from main and loops on accept(), getting client sockets back; it recv()s from the client, parses the request and, depending on the hostname in the HTTP request, pushes my custom struct onto the right queue; it then fork()s, so a child process will dequeue and handle the connection;
  • manageConnection: this process, forked by accepter, pops from the queues, reads the hostname field of the popped struct, opens a client socket, connects to the server, and satisfies the GET or CONNECT request.

The new proxy: no more fork(). I created a thread pool with four threads (one "accepter" and three "connectors": since I intend to run this proxy on my RPi 2, which has a quad-core processor, I figured at least four threads would be good). I now have one mutex and two condition variables. Apart from the threads, the mutex and the condition variables, the code is almost the same. These are the new functions the threads call:

  • enqueue: this thread contains the accept() loop; it recv()s from the client, parses the HTTP request, finds the hostname and, according to its priority, enqueues an info_conn struct (typedef'd at the beginning of the code);

  • dequeue: this thread contains the dequeue-and-manage-connection loop; it pops an info_conn struct from the queues, retrieves the client socket (the one I got back from accept()), parses the hostname and manages the GET or CONNECT request.

The problem: always the same. When it comes to managing the CONNECT request, recv() from the client always returns 0. I know recv() returns 0 when the other end of the connection has disconnected, but that's not what I want here! With the threaded approach this is a trivial producer/consumer problem (pushing to and popping from the queues), so I believe the alternation between the enqueueing and dequeueing threads is correct.

My (new) code:

```cpp
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/time.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <thread>
#include <iostream>
#include <netdb.h>
#include <queue>
#include <list>
#include <vector>
#include <condition_variable>
#include <cstdlib>

using namespace std;

#define GET 0
#define CONNECT 1
#define DEFAULTCOLOR "\033[0m"
#define RED "\033[22;31m"
#define YELLOW "\033[1;33m"
#define GREEN "\033[0;0;32m"
#define MAX_SIZE 1000
#define CONNECT_200_OK "HTTP/1.1 200 Connection established\r\nProxy-agent: myproxy\r\n\r\n"

// my custom struct stored in queues
typedef struct info_connection {
    int client_fd;
    string host;
    string payload;
    int request;
} info_conn;

queue<info_conn> q1;
queue<info_conn> q2;
queue<info_conn> q3;
queue<info_conn> q4;

vector<thread> workers;
condition_variable cond_read, cond_write;
mutex mtx;

void enqueue(int sock_client);
void dequeue(void);

int main(int argc, char *argv[])
{
    int socket_desc;
    struct sockaddr_in server;

    socket_desc = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (socket_desc == -1) {
        perror("socket()");
        exit(-1);
    }
    server.sin_family = AF_INET;
    server.sin_addr.s_addr = INADDR_ANY;
    if (argc == 2)
        server.sin_port = htons(atoi(argv[1]));
    printf("listening to port %d\n", atoi(argv[1]));
    if (bind(socket_desc, (struct sockaddr *)&server, sizeof(server)) < 0) {
        perror("bind failed. Error");
        exit(-1);
    }
    printf("binded\n");
    listen(socket_desc, 10);
    printf("listen\n");

    // thread pool, because I suck at forking
    workers.push_back(thread(enqueue, socket_desc));
    workers.push_back(thread(dequeue));
    workers.push_back(thread(dequeue));
    workers.push_back(thread(dequeue));
    for (thread& t : workers) {
        t.join();
    }
    return 0;
}

void enqueue(int sock_client)
{
    printf("enqueue()\n");
    int client_sock;
    struct sockaddr_in *client_struct;
    unsigned int clilen;
    bzero((char*)&client_struct, sizeof(client_struct));
    clilen = sizeof(client_struct);
    char host_name[128];
    char buff[4096];
    int n_recv, n_send;
    char *start_row, *end_row, *tmp_ptr, *tmp_start;
    int req;

    while ((client_sock = accept(sock_client, (struct sockaddr *)&client_struct, &clilen))) {
        memset(host_name, 0, sizeof(host_name));
        n_recv = recv(client_sock, buff, sizeof(buff), 0);
        if (n_recv < 0) {
            perror("recv()");
            break;
        }
        start_row = end_row = buff;
        while ((end_row = strstr(start_row, "\r\n")) != NULL) {
            int row_len = end_row - start_row;
            if (row_len == 0)
                break;
            if (strncmp(buff, "GET ", 4) == 0) {
                req = GET;
                tmp_start = start_row + 4;
                tmp_ptr = strstr(tmp_start, "//");
                int len = tmp_ptr - tmp_start;
                tmp_start = tmp_start + len + 2;
                tmp_ptr = strchr(tmp_start, '/');
                len = tmp_ptr - tmp_start;
                strncpy(host_name, tmp_start, len);
                break;
            }
            else if (strncmp(buff, "CONNECT ", 8) == 0) {
                req = CONNECT;
                tmp_start = start_row + 8;
                tmp_ptr = strchr(tmp_start, ':');
                int host_len = tmp_ptr - tmp_start;
                strncpy(host_name, tmp_start, host_len);
                break;
            }
            start_row = end_row + 2;
            /* if ((start_row - buff) >= strlen(buff))
                break; */
        }
        unique_lock<mutex> locker(mtx, defer_lock);
        locker.lock();
        cond_write.wait(locker, [](){
            return (q1.size() < MAX_SIZE || q2.size() < MAX_SIZE ||
                    q3.size() < MAX_SIZE || q4.size() < MAX_SIZE);
        });
        cout << "(DEBUG) thread " << this_thread::get_id() << " wants to insert, queues not full "
             << q1.size() << ' ' << q2.size() << ' ' << q3.size() << ' ' << q4.size() << '\n';
        int priority = 0;
        info_conn info_c;
        info_c.client_fd = client_sock;
        info_c.host = host_name;
        info_c.request = req;
        info_c.payload = string(buff);
        cout << "(DEBUG) thread " << this_thread::get_id() << " looking for " << host_name << " queues" << '\n';
        if (strcmp(host_name, "www.netflix.com") == 0) {
            priority = 1;
            printf("hostname = www.netflix.com, priority %d\n", priority);
            q1.push(info_c);
        }
        else if (strcmp(host_name, "www.youtube.com") == 0) {
            priority = 2;
            printf("hostname = www.youtube.com, priority %d\n", priority);
            q2.push(info_c);
        }
        else if (strcmp(host_name, "www.facebook.com") == 0) {
            priority = 3;
            printf("hostname = www.facebook.com, priority %d\n", priority);
            q3.push(info_c);
        }
        else {
            priority = 4;
            printf("hostname %s not found in queues\n", host_name);
            q4.push(info_c);
        }
        cout << GREEN << "(DEBUG) thread " << this_thread::get_id() << " inserted "
             << q1.size() << ' ' << q2.size() << ' ' << q3.size() << ' ' << q4.size() << DEFAULTCOLOR << '\n';
        locker.unlock();
        cond_read.notify_all();
    }
    if (client_sock < 0) {
        perror("accept failed");
        exit(-1);
    }
}

void dequeue(void)
{
    int fd_client = -1;
    int fd_server = -1;
    struct sockaddr_in server;
    int what_request;
    char host_name[128];
    char buffer[1500];
    int n_send, n_recv;
    size_t length;
    info_conn req;
    // CONNECT
    int r, max;
    int send_200_OK;
    int read_from_client = 0;
    int read_from_server = 0;
    int send_to_client = 0;
    int send_to_server = 0;
    struct timeval timeout;
    char buff[8192];
    fd_set fdset;

    printf("dequeue()\n");
    while (true) {
        unique_lock<mutex> locker(mtx, defer_lock);
        locker.lock();
        cond_read.wait(locker, [](){
            return (q1.size() > 0 || q2.size() > 0 || q3.size() > 0 || q4.size() > 0);
        });
        cout << "(DEBUG) thread " << this_thread::get_id() << " wants to remove, queues not empty "
             << q1.size() << ' ' << q2.size() << ' ' << q3.size() << ' ' << q4.size() << '\n';
        if (q1.size() > 0) {
            req = q1.front();
            q1.pop();
        }
        else if (q2.size() > 0) {
            req = q2.front();
            q2.pop();
        }
        else if (q3.size() > 0) {
            req = q3.front();
            q3.pop();
        }
        else if (q4.size() > 0) {
            req = q4.front();
            q4.pop();
        }
        cout << YELLOW << "(DEBUG) thread " << this_thread::get_id() << " removed, "
             << q1.size() << ' ' << q2.size() << ' ' << q3.size() << ' ' << q4.size() << DEFAULTCOLOR << '\n';
        locker.unlock();
        // notify one, because I have only one "producer" thread
        cond_write.notify_one();

        fd_client = req.client_fd;
        //memcpy(host_name, req.host.c_str(), strlen(req.host));
        length = req.host.copy(host_name, req.host.size(), 0);
        host_name[length] = '\0';
        what_request = req.request;
        //memcpy(buffer, req.payload, req.payload.size());
        length = req.payload.copy(buffer, req.payload.size(), 0);
        buffer[length] = '\0';
        what_request = req.request;
        //cout << RED << "(DEBUG) thread " << this_thread::get_id() << " copied packet payload " <<
        //        buffer << DEFAULTCOLOR << '\n';

        struct addrinfo* result;
        struct addrinfo* res;
        int error;
        struct sockaddr_in *resolve;

        fd_server = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if (fd_server < 0) {
            perror("socket()");
            exit(-1);
        }
        cout << "(DEBUG) thread " << this_thread::get_id() << " fd_server " << fd_server << '\n';
        error = getaddrinfo(host_name, NULL, NULL, &result);
        if (error != 0) {
            if (error == EAI_SYSTEM) {
                perror("getaddrinfo");
            } else {
                fprintf(stderr, "error in getaddrinfo for (%s): %s\n", host_name, gai_strerror(error));
            }
            exit(EXIT_FAILURE);
        }
        if (what_request == GET) {
            server.sin_port = htons(80);
        }
        else if (what_request == CONNECT) {
            server.sin_port = htons(443);
        }
        server.sin_family = AF_INET;
        cout << "(DEBUG) thread " << this_thread::get_id() << " getaddrinfo()" << '\n';
        for (res = result; res != NULL; res = res->ai_next) {
            if (res->ai_family == AF_INET) {
                resolve = (struct sockaddr_in *)res->ai_addr;
                //char *ip = inet_ntoa(resolve->sin_addr);
                //printf("%s\n", ip);
                server.sin_addr.s_addr = resolve->sin_addr.s_addr;
                if (connect(fd_server, (struct sockaddr *)&server, sizeof(struct sockaddr_in)) < 0) {
                    fflush(stdout);
                    perror("connect()");
                }
                else {
                    cout << "(DEBUG) thread " << this_thread::get_id() << " connected to "
                         << inet_ntoa(server.sin_addr) << '\n';
                }
                break;
            }
        }
        // dealing with GET
        if (what_request == GET) {
            cout << "thread " << this_thread::get_id() << " dealing GET " << host_name
                 << " sending to server " << buffer << '\n';
            n_send = send(fd_server, buffer, strlen(buffer)+1, 0);
            if (n_send < 0) {
                cout << "thread " << this_thread::get_id() << " error sending GET request to server" << '\n';
                perror("send()");
                break;
            }
            do {
                memset(buffer, 0, sizeof(buffer));
                n_recv = recv(fd_server, buffer, sizeof(buffer), 0);
                cout << "thread " << this_thread::get_id() << " GET: " << host_name
                     << " read from recv() " << n_recv << " bytes, " << fd_client << "<->" << fd_server << '\n';
                n_send = send(fd_client, buffer, n_recv, 0);
            } while (n_recv > 0);
            if (n_recv < 0) {
                cout << RED << "thread " << this_thread::get_id()
                     << " error sending GET response from server to client" << DEFAULTCOLOR << '\n';
                perror("send()");
                break;
            }
            close(fd_client);
            close(fd_server);
            cout << "thread " << this_thread::get_id() << " done with GET request, quitting\n";
        }
        // dealing with CONNECT
        else if (what_request == CONNECT) {
            cout << "thread " << this_thread::get_id() << " dealing CONNECT " << host_name << '\n';
            max = fd_server >= fd_client ? fd_server+1 : fd_client+1;
            send_200_OK = send(fd_client, CONNECT_200_OK, sizeof(CONNECT_200_OK), 0);
            if (send_200_OK < 0) {
                perror("send() 200 OK to client");
                break;
            }
            cout << "thread " << this_thread::get_id() << " SENT 200 OK to client " << '\n';
            int tot_recvd;
            int tot_sent;
            // TCP tunnel
            while (true) {
                memset(buff, 0, sizeof(buff));
                FD_ZERO(&fdset);
                FD_SET(fd_client, &fdset);
                FD_SET(fd_server, &fdset);
                timeout.tv_sec = 15;
                timeout.tv_usec = 0;
                r = select(max, &fdset, NULL, NULL, &timeout);
                if (r < 0) {
                    perror("select()");
                    close(fd_client);
                    close(fd_server);
                    break;
                }
                if (r == 0) {
                    // select timed out
                    printf("tunnel(): select() request timeout 408\n");
                    close(fd_client);
                    close(fd_server);
                    break;
                }
                if (FD_ISSET(fd_client, &fdset)) {
                    tot_recvd = 0;
                    tot_sent = 0;
                    do {
                        read_from_client = recv(fd_client, &(buff[tot_recvd]), sizeof(buff), 0);
                        tot_recvd += read_from_client;
                        cout << "thread " << this_thread::get_id() << " select(), reading from client "
                             << fd_client << " " << read_from_client << " bytes, "
                             << fd_client << " <-> " << fd_server << '\n';
                        if (buff[tot_recvd-1] == '\0') {
                            break;
                        }
                    } while (read_from_client > 0);
                    if (read_from_client < 0) {
                        perror("recv()");
                        close(fd_client);
                        close(fd_server);
                        break;
                    }
                    if (read_from_client == 0) {
                        // this always happens!!!
                    }
                    send_to_server = send(fd_server, buff, read_from_client, 0);
                    if (send_to_server < 0) {
                        perror("send() to client");
                        close(fd_client);
                        close(fd_server);
                        break;
                    }
                }
                if (FD_ISSET(fd_server, &fdset)) {
                    tot_recvd = 0;
                    tot_sent = 0;
                    do {
                        read_from_server = recv(fd_server, &(buff[tot_recvd]), sizeof(buff), 0);
                        tot_recvd += read_from_server;
                        cout << "thread " << this_thread::get_id() << " select(), reading from server "
                             << fd_client << " " << read_from_server << " bytes, "
                             << fd_client << " <-> " << fd_server << '\n';
                        if (buff[tot_recvd-1] == '\0') {
                            break;
                        }
                    } while (read_from_server > 0);
                    if (read_from_server < 0) {
                        perror("read()");
                        close(fd_client);
                        close(fd_server);
                        break;
                    }
                    if (read_from_server == 0) {
                        cout << "thread " << this_thread::get_id() << " select(), server closed conn" << '\n';
                        close(fd_client);
                        close(fd_server);
                        break;
                    }
                    send_to_client = send(fd_client, buff, read_from_server, 0);
                    if (send_to_client < 0) {
                        perror("send() to client");
                        close(fd_client);
                        close(fd_server);
                        break;
                    }
                }
            }
            cout << "thread " << this_thread::get_id() << " done with CONNECT request\n";
        }
    }
}
```

Environment: my laptop running Ubuntu 14.04, x86_64; the proxy is tested in Chrome with the SwitchyOmega extension, which redirects traffic on a specific port (the same port I pass to my proxy); compiled with:

g++ -std=c++11 -pedantic -Wall -o funwithproxyfork funwithproxyfork.cpp -lpthread

Output (tried with Netflix and YouTube, both showing the same problem, i.e. recv() returning 0):

Then it says nothing.

1 Answer:

Answer 0 (score: -1)

Looking over the code, this appears to be normal, and a consequence of how HTTP/1.1 works.

You are probably using a client that supports HTTP/1.1 pipelining. When HTTP/1.1 pipelining is in effect, the server keeps the connection open in case the client wants to send another request. If the client has nothing more to send, it closes the connection.

Your code seems to expect the server to close the connection after responding to the HTTP request, and does not expect the client to close its end of the connection first. That is not how HTTP/1.1 works: either the client or the server may close the connection first. Whichever one closes it, that's normal.

So there is no problem here, apart from a few issues I mentioned separately in the comments, which are unrelated to the recv() question at hand. Additionally, in many places the code does not adequately check the return value of send(), and assumes that all the requested bytes were sent. That is wrong. There is no guarantee that send() will send the exact number of bytes requested. It can actually send fewer, which is indicated by the return value: the number of bytes actually sent.

This proxy will start failing under a heavy traffic load, when fewer bytes get sent than were requested and the code fails to detect and correctly handle that situation. For example, it will read 2000 bytes from the server and try to send them to the client; send() will report 1000 bytes sent, the code will go on its merry way, and the client will never receive the entire response from the server. And so on.

Also, there are a few other race conditions here that are likely to "wedge" or deadlock the proxy with an HTTP/1.1 client that fully implements pipelining. But if you start running into those, that will be another question to ask...