c# .net networking tcp scalability




How to write a scalable TCP/IP based server

To be clear, I'm looking for .NET based solutions (C# if possible, but any .NET language will work).

You are not going to get the highest level of scalability if you go purely with .NET. GC pauses can hamper the latency.

I'm going to need to start at least one thread for the service. I am considering using the async API (BeginReceive, etc.) since I don't know how many clients I will have connected at any given time (possibly hundreds). I definitely do not want to start a thread for each connection.

Overlapped I/O is generally considered to be Windows' fastest API for network communication. I don't know if this is the same as your async API. Do not use select, as each call needs to check every socket that is open instead of having callbacks on active sockets.

I am in the design phase of writing a new Windows service application that accepts TCP/IP connections for long-running connections (i.e., this is not like HTTP, where there are many short connections; rather, a client connects and stays connected for hours, days, or even weeks).

I'm looking for ideas on the best way to design the network architecture. I'm going to need to start at least one thread for the service. I am considering using the async API (BeginReceive, etc.) since I don't know how many clients will be connected at any given time (possibly hundreds). I definitely do not want to start a thread for each connection.

Data will mostly flow out to the clients from my server, but some commands will occasionally be sent by the clients. This is primarily a monitoring application in which my server periodically sends status data to the clients.

Any suggestions on the best way to make this as scalable as possible? Basic workflow? Thanks.

EDIT: To be clear, I'm looking for .NET based solutions (C# if possible, but any .NET language will work).

BOUNTY NOTE: To receive the bounty, I expect more than a simple answer. I would need a working example of a solution, either as a pointer to something I could download or a short example inline. And it must be .NET and Windows based (any .NET language is acceptable).

EDIT: I want to thank everyone who gave good answers. Unfortunately I could only accept one, and I chose to accept the better-known Begin/End method. Esac's solution may well be better, but it is still new enough that I don't know for sure how it will work out.

I have upvoted all the answers I thought were good; I wish I could do more for you all. Thanks again.


There are many ways of doing network operations in C#. All of them use different mechanisms under the hood, and thus suffer major performance problems with high concurrency. The Begin* operations are one of those that many people mistake for being the fastest way of doing networking.

To solve these problems, they introduced the *Async set of methods. From MSDN: http://msdn.microsoft.com/en-us/library/system.net.sockets.socketasynceventargs.aspx

The SocketAsyncEventArgs class is part of a set of enhancements to the System.Net.Sockets.Socket class that provide an alternative asynchronous pattern that can be used by specialized high-performance socket applications. This class was specifically designed for network server applications that require high performance. An application can use the enhanced asynchronous pattern exclusively, or only in targeted hot areas (for example, when receiving large amounts of data).

The main feature of these enhancements is the avoidance of the repeated allocation and synchronization of objects during high-volume asynchronous socket I/O. The Begin/End design pattern currently implemented by the System.Net.Sockets.Socket class requires a System.IAsyncResult object to be allocated for each asynchronous socket operation.

Under the covers, the *Async API uses I/O completion ports, which is the fastest way of performing networking operations; see http://msdn.microsoft.com/en-us/magazine/cc302334.aspx

And just to help you out, I am including the source code for a telnet server I wrote using the *Async API. I am only including the relevant parts. Also note that instead of processing the data inline, I opt to push it onto a lock-free (wait-free) queue that is processed on a separate thread. Note that I am not including the corresponding Pool class, which is just a simple pool that will create a new object if it is empty, nor the Buffer class, which is just a self-expanding buffer that is not really needed unless you are receiving an indeterminate amount of data. If you would like more information, feel free to send me a PM.

public class Telnet { private readonly Pool<SocketAsyncEventArgs> m_EventArgsPool; private Socket m_ListenSocket; /// <summary> /// This event fires when a connection has been established. /// </summary> public event EventHandler<SocketAsyncEventArgs> Connected; /// <summary> /// This event fires when a connection has been shutdown. /// </summary> public event EventHandler<SocketAsyncEventArgs> Disconnected; /// <summary> /// This event fires when data is received on the socket. /// </summary> public event EventHandler<SocketAsyncEventArgs> DataReceived; /// <summary> /// This event fires when data is finished sending on the socket. /// </summary> public event EventHandler<SocketAsyncEventArgs> DataSent; /// <summary> /// This event fires when a line has been received. /// </summary> public event EventHandler<LineReceivedEventArgs> LineReceived; /// <summary> /// Specifies the port to listen on. /// </summary> [DefaultValue(23)] public int ListenPort { get; set; } /// <summary> /// Constructor for Telnet class. /// </summary> public Telnet() { m_EventArgsPool = new Pool<SocketAsyncEventArgs>(); ListenPort = 23; } /// <summary> /// Starts the telnet server listening and accepting data. /// </summary> public void Start() { IPEndPoint endpoint = new IPEndPoint(0, ListenPort); m_ListenSocket = new Socket(endpoint.AddressFamily, SocketType.Stream, ProtocolType.Tcp); m_ListenSocket.Bind(endpoint); m_ListenSocket.Listen(100); // // Post Accept // StartAccept(null); } /// <summary> /// Not Yet Implemented. Should shutdown all connections gracefully. /// </summary> public void Stop() { //throw (new NotImplementedException()); } // // ACCEPT // /// <summary> /// Posts a requests for Accepting a connection. If it is being called from the completion of /// an AcceptAsync call, then the AcceptSocket is cleared since it will create a new one for /// the new user. /// </summary> /// <param name="e">null if posted from startup, otherwise a <b>SocketAsyncEventArgs</b> for reuse.</param> private void StartAccept(SocketAsyncEventArgs e) { if (e == null) { e = m_EventArgsPool.Pop(); e.Completed += Accept_Completed; } else { e.AcceptSocket = null; } if (m_ListenSocket.AcceptAsync(e) == false) { Accept_Completed(this, e); } } /// <summary> /// Completion callback routine for the AcceptAsync post. This will verify that the Accept occured /// and then setup a Receive chain to begin receiving data. /// </summary> /// <param name="sender">object which posted the AcceptAsync</param> /// <param name="e">Information about the Accept call.</param> private void Accept_Completed(object sender, SocketAsyncEventArgs e) { // // Socket Options // e.AcceptSocket.NoDelay = true; // // Create and setup a new connection object for this user // Connection connection = new Connection(this, e.AcceptSocket); // // Tell the client that we will be echo''ing data sent // DisableEcho(connection); // // Post the first receive // SocketAsyncEventArgs args = m_EventArgsPool.Pop(); args.UserToken = connection; // // Connect Event // if (Connected != null) { Connected(this, args); } args.Completed += Receive_Completed; PostReceive(args); // // Post another accept // StartAccept(e); } // // RECEIVE // /// <summary> /// Post an asynchronous receive on the socket. 
/// </summary> /// <param name="e">Used to store information about the Receive call.</param> private void PostReceive(SocketAsyncEventArgs e) { Connection connection = e.UserToken as Connection; if (connection != null) { connection.ReceiveBuffer.EnsureCapacity(64); e.SetBuffer(connection.ReceiveBuffer.DataBuffer, connection.ReceiveBuffer.Count, connection.ReceiveBuffer.Remaining); if (connection.Socket.ReceiveAsync(e) == false) { Receive_Completed(this, e); } } } /// <summary> /// Receive completion callback. Should verify the connection, and then notify any event listeners /// that data has been received. For now it is always expected that the data will be handled by the /// listeners and thus the buffer is cleared after every call. /// </summary> /// <param name="sender">object which posted the ReceiveAsync</param> /// <param name="e">Information about the Receive call.</param> private void Receive_Completed(object sender, SocketAsyncEventArgs e) { Connection connection = e.UserToken as Connection; if (e.BytesTransferred == 0 || e.SocketError != SocketError.Success || connection == null) { Disconnect(e); return; } connection.ReceiveBuffer.UpdateCount(e.BytesTransferred); OnDataReceived(e); HandleCommand(e); Echo(e); OnLineReceived(connection); PostReceive(e); } /// <summary> /// Handles Event of Data being Received. /// </summary> /// <param name="e">Information about the received data.</param> protected void OnDataReceived(SocketAsyncEventArgs e) { if (DataReceived != null) { DataReceived(this, e); } } /// <summary> /// Handles Event of a Line being Received. /// </summary> /// <param name="connection">User connection.</param> protected void OnLineReceived(Connection connection) { if (LineReceived != null) { int index = 0; int start = 0; while ((index = connection.ReceiveBuffer.IndexOf(''/n'', index)) != -1) { string s = connection.ReceiveBuffer.GetString(start, index - start - 1); s = s.Backspace(); LineReceivedEventArgs args = new LineReceivedEventArgs(connection, s); Delegate[] delegates = LineReceived.GetInvocationList(); foreach (Delegate d in delegates) { d.DynamicInvoke(new object[] { this, args }); if (args.Handled == true) { break; } } if (args.Handled == false) { connection.CommandBuffer.Enqueue(s); } start = index; index++; } if (start > 0) { connection.ReceiveBuffer.Reset(0, start + 1); } } } // // SEND // /// <summary> /// Overloaded. Sends a string over the telnet socket. /// </summary> /// <param name="connection">Connection to send data on.</param> /// <param name="s">Data to send.</param> /// <returns>true if the data was sent successfully.</returns> public bool Send(Connection connection, string s) { if (String.IsNullOrEmpty(s) == false) { return Send(connection, Encoding.Default.GetBytes(s)); } return false; } /// <summary> /// Overloaded. Sends an array of data to the client. /// </summary> /// <param name="connection">Connection to send data on.</param> /// <param name="data">Data to send.</param> /// <returns>true if the data was sent successfully.</returns> public bool Send(Connection connection, byte[] data) { return Send(connection, data, 0, data.Length); } public bool Send(Connection connection, char c) { return Send(connection, new byte[] { (byte)c }, 0, 1); } /// <summary> /// Sends an array of data to the client. 
/// </summary> /// <param name="connection">Connection to send data on.</param> /// <param name="data">Data to send.</param> /// <param name="offset">Starting offset of date in the buffer.</param> /// <param name="length">Amount of data in bytes to send.</param> /// <returns></returns> public bool Send(Connection connection, byte[] data, int offset, int length) { bool status = true; if (connection.Socket == null || connection.Socket.Connected == false) { return false; } SocketAsyncEventArgs args = m_EventArgsPool.Pop(); args.UserToken = connection; args.Completed += Send_Completed; args.SetBuffer(data, offset, length); try { if (connection.Socket.SendAsync(args) == false) { Send_Completed(this, args); } } catch (ObjectDisposedException) { // // return the SocketAsyncEventArgs back to the pool and return as the // socket has been shutdown and disposed of // m_EventArgsPool.Push(args); status = false; } return status; } /// <summary> /// Sends a command telling the client that the server WILL echo data. /// </summary> /// <param name="connection">Connection to disable echo on.</param> public void DisableEcho(Connection connection) { byte[] b = new byte[] { 255, 251, 1 }; Send(connection, b); } /// <summary> /// Completion callback for SendAsync. /// </summary> /// <param name="sender">object which initiated the SendAsync</param> /// <param name="e">Information about the SendAsync call.</param> private void Send_Completed(object sender, SocketAsyncEventArgs e) { e.Completed -= Send_Completed; m_EventArgsPool.Push(e); } /// <summary> /// Handles a Telnet command. /// </summary> /// <param name="e">Information about the data received.</param> private void HandleCommand(SocketAsyncEventArgs e) { Connection c = e.UserToken as Connection; if (c == null || e.BytesTransferred < 3) { return; } for (int i = 0; i < e.BytesTransferred; i += 3) { if (e.BytesTransferred - i < 3) { break; } if (e.Buffer[i] == (int)TelnetCommand.IAC) { TelnetCommand command = (TelnetCommand)e.Buffer[i + 1]; TelnetOption option = (TelnetOption)e.Buffer[i + 2]; switch (command) { case TelnetCommand.DO: if (option == TelnetOption.Echo) { // ECHO } break; case TelnetCommand.WILL: if (option == TelnetOption.Echo) { // ECHO } break; } c.ReceiveBuffer.Remove(i, 3); } } } /// <summary> /// Echoes data back to the client. /// </summary> /// <param name="e">Information about the received data to be echoed.</param> private void Echo(SocketAsyncEventArgs e) { Connection connection = e.UserToken as Connection; if (connection == null) { return; } // // backspacing would cause the cursor to proceed beyond the beginning of the input line // so prevent this // string bs = connection.ReceiveBuffer.ToString(); if (bs.CountAfterBackspace() < 0) { return; } // // find the starting offset (first non-backspace character) // int i = 0; for (i = 0; i < connection.ReceiveBuffer.Count; i++) { if (connection.ReceiveBuffer[i] != ''/b'') { break; } } string s = Encoding.Default.GetString(e.Buffer, Math.Max(e.Offset, i), e.BytesTransferred); if (connection.Secure) { s = s.ReplaceNot("/r/n/b".ToCharArray(), ''*''); } s = s.Replace("/b", "/b /b"); Send(connection, s); } // // DISCONNECT // /// <summary> /// Disconnects a socket. /// </summary> /// <remarks> /// It is expected that this disconnect is always posted by a failed receive call. Calling the public /// version of this method will cause the next posted receive to fail and this will cleanup properly. /// It is not advised to call this method directly. 
/// </remarks> /// <param name="e">Information about the socket to be disconnected.</param> private void Disconnect(SocketAsyncEventArgs e) { Connection connection = e.UserToken as Connection; if (connection == null) { throw (new ArgumentNullException("e.UserToken")); } try { connection.Socket.Shutdown(SocketShutdown.Both); } catch { } connection.Socket.Close(); if (Disconnected != null) { Disconnected(this, e); } e.Completed -= Receive_Completed; m_EventArgsPool.Push(e); } /// <summary> /// Marks a specific connection for graceful shutdown. The next receive or send to be posted /// will fail and close the connection. /// </summary> /// <param name="connection"></param> public void Disconnect(Connection connection) { try { connection.Socket.Shutdown(SocketShutdown.Both); } catch (Exception) { } } /// <summary> /// Telnet command codes. /// </summary> internal enum TelnetCommand { SE = 240, NOP = 241, DM = 242, BRK = 243, IP = 244, AO = 245, AYT = 246, EC = 247, EL = 248, GA = 249, SB = 250, WILL = 251, WONT = 252, DO = 253, DONT = 254, IAC = 255 } /// <summary> /// Telnet command options. /// </summary> internal enum TelnetOption { Echo = 1, SuppressGoAhead = 3, Status = 5, TimingMark = 6, TerminalType = 24, WindowSize = 31, TerminalSpeed = 32, RemoteFlowControl = 33, LineMode = 34, EnvironmentVariables = 36 } }
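For reference, the Pool class used above is not shown. A minimal stand-in could look like the sketch below; this is my own assumption about its shape (a pool that hands out reused instances and creates a new one when empty), not Esac's actual class, and it assumes .NET 4's ConcurrentStack (on .NET 2/3.5 a locked Stack<T> would do).

using System.Collections.Concurrent;

// Minimal sketch: hands out pooled instances and creates a new one when the pool is empty.
public class Pool<T> where T : new()
{
    private readonly ConcurrentStack<T> m_Items = new ConcurrentStack<T>();

    // Returns a pooled instance, or a fresh one if the pool is empty.
    public T Pop()
    {
        T item;
        return m_Items.TryPop(out item) ? item : new T();
    }

    // Returns an instance to the pool for later reuse.
    public void Push(T item)
    {
        m_Items.Push(item);
    }
}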


I have written something similar to this in the past. From my research years ago, writing your own socket implementation was the best bet, using asynchronous sockets. This meant that clients not really doing anything actually required relatively few resources. Anything that does occur is handled by the .NET thread pool.

I wrote it as a class that manages all connections for the servers.

I simply used a list to hold all the client connections, but if you need faster lookups for larger lists, you can write it however you want.

private List<xConnection> _sockets;

You also need the socket that actually listens for incoming connections.

private System.Net.Sockets.Socket _serverSocket;

The Start method actually starts the server socket and begins listening for incoming connections.

public bool Start() { System.Net.IPHostEntry localhost = System.Net.Dns.GetHostEntry(System.Net.Dns.GetHostName()); System.Net.IPEndPoint serverEndPoint; try { serverEndPoint = new System.Net.IPEndPoint(localhost.AddressList[0], _port); } catch (System.ArgumentOutOfRangeException e) { throw new ArgumentOutOfRangeException("Port number entered would seem to be invalid, should be between 1024 and 65000", e); } try { _serverSocket = new System.Net.Sockets.Socket(serverEndPoint.Address.AddressFamily, SocketType.Stream, ProtocolType.Tcp); } catch (System.Net.Sockets.SocketException e) { throw new ApplicationException("Could not create socket, check to make sure not duplicating port", e); } try { _serverSocket.Bind(serverEndPoint); _serverSocket.Listen(_backlog); } catch (Exception e) { throw new ApplicationException("Error occured while binding socket, check inner exception", e); } try { //warning, only call this once, this is a bug in .net 2.0 that breaks if // you''re running multiple asynch accepts, this bug may be fixed, but // it was a major pain in the ass previously, so make sure there is only one //BeginAccept running _serverSocket.BeginAccept(new AsyncCallback(acceptCallback), _serverSocket); } catch (Exception e) { throw new ApplicationException("Error occured starting listeners, check inner exception", e); } return true; }

I would like to note that the exception-handling code looks bad, but the reason is that I had exception-suppression code in there so that any exceptions would be suppressed and false returned if a configuration option was set; I wanted to remove it for the sake of brevity.

The _serverSocket.BeginAccept(new AsyncCallback(acceptCallback), _serverSocket) above essentially sets up our server socket to call the acceptCallback method whenever a user connects. This method runs from the .NET thread pool, which automatically handles creating additional worker threads if you have many blocking operations. This should optimally handle any load on the server.

private void acceptCallback(IAsyncResult result) { xConnection conn = new xConnection(); try { //Finish accepting the connection System.Net.Sockets.Socket s = (System.Net.Sockets.Socket)result.AsyncState; conn = new xConnection(); conn.socket = s.EndAccept(result); conn.buffer = new byte[_bufferSize]; lock (_sockets) { _sockets.Add(conn); } //Queue recieving of data from the connection conn.socket.BeginReceive(conn.buffer, 0, conn.buffer.Length, SocketFlags.None, new AsyncCallback(ReceiveCallback), conn); //Queue the accept of the next incomming connection _serverSocket.BeginAccept(new AsyncCallback(acceptCallback), _serverSocket); } catch (SocketException e) { if (conn.socket != null) { conn.socket.Close(); lock (_sockets) { _sockets.Remove(conn); } } //Queue the next accept, think this should be here, stop attacks based on killing the waiting listeners _serverSocket.BeginAccept(new AsyncCallback(acceptCallback), _serverSocket); } catch (Exception e) { if (conn.socket != null) { conn.socket.Close(); lock (_sockets) { _sockets.Remove(conn); } } //Queue the next accept, think this should be here, stop attacks based on killing the waiting listeners _serverSocket.BeginAccept(new AsyncCallback(acceptCallback), _serverSocket); } }

The code above essentially just finishes accepting the connection that comes in, queues a BeginReceive, which is a callback that will run when the client sends data, and then queues the next acceptCallback, which will accept the next client connection that comes in.

The BeginReceive call is what tells the socket what to do when it receives data from the client. For BeginReceive, you need to give it a byte array, which is where it will copy the data when the client sends it. The ReceiveCallback method will then get called, which is how we handle receiving the data.

private void ReceiveCallback(IAsyncResult result) { //get our connection from the callback xConnection conn = (xConnection)result.AsyncState; //catch any errors, we''d better not have any try { //Grab our buffer and count the number of bytes receives int bytesRead = conn.socket.EndReceive(result); //make sure we''ve read something, if we haven''t it supposadly means that the client disconnected if (bytesRead > 0) { //put whatever you want to do when you receive data here //Queue the next receive conn.socket.BeginReceive(conn.buffer, 0, conn.buffer.Length, SocketFlags.None, new AsyncCallback(ReceiveCallback), conn); } else { //Callback run but no data, close the connection //supposadly means a disconnect //and we still have to close the socket, even though we throw the event later conn.socket.Close(); lock (_sockets) { _sockets.Remove(conn); } } } catch (SocketException e) { //Something went terribly wrong //which shouldn''t have happened if (conn.socket != null) { conn.socket.Close(); lock (_sockets) { _sockets.Remove(conn); } } } }

EDIT: In this pattern I forgot to mention that in this area of code:

//put whatever you want to do when you receive data here

//Queue the next receive
conn.socket.BeginReceive(conn.buffer, 0, conn.buffer.Length, SocketFlags.None, new AsyncCallback(ReceiveCallback), conn);

What I would generally do is, in the "put whatever you want here" code, do reassembly of packets into messages, and then create them as jobs on the thread pool. This way the BeginReceive of the next block from the client isn't delayed while whatever message-processing code is running.
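As a rough illustration of that idea (my own sketch, not Kevin's code; ProcessMessage is a hypothetical method), the marked spot could hand the received bytes to the thread pool before re-posting the receive:

// Copy out what was received so the connection buffer can be reused immediately.
byte[] received = new byte[bytesRead];
Buffer.BlockCopy(conn.buffer, 0, received, 0, bytesRead);

// Process off the callback thread; reassemble packets into messages inside ProcessMessage.
ThreadPool.QueueUserWorkItem(state => ProcessMessage(conn, (byte[])state), received);

// Queue the next receive right away, exactly as in the original code.
conn.socket.BeginReceive(conn.buffer, 0, conn.buffer.Length, SocketFlags.None,
    new AsyncCallback(ReceiveCallback), conn);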

The receive callback finishes reading the data from the socket by calling EndReceive. This fills the buffer provided in the BeginReceive call. Once you do whatever you want where I left the comment, we call the next BeginReceive method, which will run the callback again if the client sends any more data. Now here is the really tricky part: when the client sends data, your receive callback might only be called with part of the message. Reassembly can become very complicated. I used my own method and created a sort of proprietary protocol to do this. I left it out, but if you ask, I can add it in. That handler was actually the most complicated piece of code I have ever written.

public bool Send(byte[] message, xConnection conn)
{
    if (conn != null && conn.socket.Connected)
    {
        lock (conn.socket)
        {
            //we use a blocking mode send, no async on the outgoing
            //since this is primarily a multithreaded application, it shouldn't cause problems to send in blocking mode
            conn.socket.Send(message, message.Length, SocketFlags.None);
        }
    }
    else
        return false;
    return true;
}

The Send method above actually uses a synchronous Send call; for me that was fine due to the message sizes and the multithreaded nature of my application. If you want to send to every client, you simply need to loop over the _sockets list.
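For example, a broadcast over the _sockets list could look roughly like this (my own sketch reusing the Send method and fields shown above; the snapshot copy avoids holding the lock while sending):

public void Broadcast(byte[] message)
{
    // Snapshot the connection list so the lock is not held during the sends.
    List<xConnection> snapshot;
    lock (_sockets)
    {
        snapshot = new List<xConnection>(_sockets);
    }

    foreach (xConnection conn in snapshot)
    {
        // Send returns false for dead connections; cleanup happens in the receive path.
        Send(message, conn);
    }
}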

The xConnection class you see referenced above is basically a simple wrapper for a socket to include the byte buffer, and in my implementation some extras.

public class xConnection : xBase
{
    public byte[] buffer;
    public System.Net.Sockets.Socket socket;
}

Also for reference, here are the usings I include, since it always annoys me when they aren't included.

using System.Net.Sockets;

Hopefully that's helpful; it may not be the cleanest code, but it works. There are also some nuances in the code that you should be wary of changing. For one, only have a single BeginAccept called at any one time. There used to be a very annoying .NET bug around this, which was years ago, so I don't remember the details.

Also, in the ReceiveCallback code, we process everything received from the socket before queuing the next receive. This means that for a single socket we are only ever in ReceiveCallback once at any point in time, and we don't need to use thread synchronization. However, if you reorder this to call the next receive immediately after pulling the data, which might be a little faster, you will need to make sure you correctly synchronize the threads.

Also, I stripped out a lot of my code but left the essence of what is happening in place. This should be a good start for your design. Leave a comment if you have any more questions about this.


There used to be a really good discussion of scalable TCP/IP using .NET written by Chris Mullins of Coversant; unfortunately his blog appears to have disappeared from its prior location, so I will try to piece together his advice from memory (some useful comments of his appear in this thread: C++ vs. C#: Developing a highly scalable IOCP server).

First and foremost, note that both the Begin/End and the Async methods on the Socket class make use of I/O Completion Ports (IOCP) to provide scalability. This makes a much bigger difference to scalability (when used correctly; see below) than which of the two methods you actually pick to implement your solution.

Chris Mullins' posts were based on using Begin/End, which is the one I personally have experience with. Note that Chris put together a solution based on this that scaled up to 10,000s of concurrent client connections on a 32-bit machine with 2 GB of memory, and up to 100,000s on a 64-bit platform with sufficient memory. From my own experience with this technique (although nowhere near this kind of load) I have no reason to doubt these indicative figures.

IOCP versus thread-per-connection or 'select' primitives

The reason you want to use a mechanism that uses IOCP is that it uses a very low-level Windows thread pool that does not wake up any thread until there is actual data on the I/O channel you are trying to read from (note that IOCP can also be used for file I/O). The benefit of this is that Windows does not have to switch to a thread only to find that there is no data yet, so it reduces the number of context switches your server will have to make to the bare minimum required.

Context switches are what will definitely kill the 'thread-per-connection' mechanism, although this is a viable solution if you are only dealing with a few dozen connections. This mechanism is, however, not 'scalable'.

Important considerations when using IOCP

Memory

First and foremost, it is critical to understand that IOCP can easily result in memory issues under .NET if your implementation is too naive. Every IOCP BeginReceive call will result in 'pinning' of the buffer you are reading into. For a good explanation of why this is a problem, see: Yun Jin's Weblog: OutOfMemoryException and Pinning.

Luckily this problem can be avoided, but it requires a bit of a trade-off. The suggested solution is to allocate one big byte[] buffer at application startup (or close to it), of at least 90 KB or so (as of .NET 2; the required size may be larger in later versions). The reason to do this is that large memory allocations automatically end up in a non-compacting memory segment (the Large Object Heap) that is effectively automatically pinned. By allocating one large buffer at startup you make sure that this block of unmovable memory sits at a relatively 'low address' where it will not get in the way and cause fragmentation.

You can then use offsets to segment this one big buffer into separate areas for each connection that needs to read some data. This is where a trade-off comes into play; since this buffer needs to be pre-allocated, you will have to decide how much buffer space you need per connection, and what upper limit you want to set on the number of connections you want to scale to (or you can implement an abstraction that can allocate additional pinned buffers once you need them).

The simplest solution would be to assign every connection a single byte at a unique offset within this buffer. Then you can make a BeginReceive call for a single byte to be read, and perform the rest of the reading as a result of the callback you get.
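A minimal sketch of that idea (my own illustration, not Chris Mullins' code): allocate one large buffer up front so it ends up on the Large Object Heap, and hand every connection a one-byte slice of it for its initial BeginReceive.

using System;
using System.Threading;

// One big buffer allocated once (over 85,000 bytes lands on the Large Object Heap,
// so the GC will not move it), sliced into one-byte offsets, one per connection.
public class ReceiveBufferManager
{
    private readonly byte[] _bigBuffer;
    private int _nextOffset = -1;

    public ReceiveBufferManager(int maxConnections)
    {
        _bigBuffer = new byte[Math.Max(maxConnections, 90 * 1024)];
    }

    public byte[] Buffer { get { return _bigBuffer; } }

    // Hands out a unique offset for a new connection.
    public int ReserveOffset()
    {
        return Interlocked.Increment(ref _nextOffset);
    }
}

// Usage sketch: read a single byte at this connection's offset; the rest of the
// request is read from the callback, as described under Processing below.
// socket.BeginReceive(bufferManager.Buffer, offsetForThisConnection, 1,
//     SocketFlags.None, ReceiveCallback, state);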

Processing

When you get the callback from the Begin call you made, it is very important to realise that the code in the callback will execute on the low-level IOCP thread. It is absolutely essential that you avoid lengthy operations in this callback. Using these threads for complex processing will kill your scalability just as effectively as using 'thread-per-connection'.

The suggested solution is to use the callback only to queue up a work item to process the incoming data, which will be executed on some other thread. Avoid any potentially blocking operations inside the callback so that the IOCP thread can return to its pool as quickly as possible. In .NET 4.0 I would suggest the easiest solution is to spawn a Task, giving it a reference to the client socket and a copy of the first byte that was already read by the BeginReceive call. This task is then responsible for reading all data from the socket that represents the request you are processing, executing it, and then making a new BeginReceive call to queue the socket for IOCP once more. Pre .NET 4.0, you can use the ThreadPool, or create your own threaded work-queue implementation.
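A sketch of what that could look like on .NET 4.0 (my own illustration; ConnectionState, ReadRequest, Execute and CloseConnection are hypothetical helpers, not code from this thread):

private void ReceiveCallback(IAsyncResult ar)
{
    var state = (ConnectionState)ar.AsyncState;          // hypothetical per-connection state
    int read = state.Socket.EndReceive(ar);
    if (read == 0) { CloseConnection(state); return; }   // remote side closed

    byte firstByte = state.PinnedBuffer[state.Offset];

    // Do nothing heavy here: hand off to a Task so the IOCP thread returns to its pool.
    Task.Factory.StartNew(() =>
    {
        byte[] request = ReadRequest(state.Socket, firstByte); // reads the rest of the request
        Execute(state, request);                               // application-level processing

        // Re-queue the socket for IOCP with another single-byte read.
        state.Socket.BeginReceive(state.PinnedBuffer, state.Offset, 1,
            SocketFlags.None, ReceiveCallback, state);
    });
}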

Summary

Basically, I would suggest using Kevin's sample code for this solution, with the following added warnings:

  • Make sure the buffer you pass to BeginReceive is already 'pinned'
  • Make sure the callback you pass to BeginReceive does nothing more than queue up a task to handle the actual processing of the incoming data

When you do that, I have no doubt you could replicate Chris's results in scaling up to potentially hundreds of thousands of simultaneous clients (given the right hardware and an efficient implementation of your own processing code, of course ;)


Have you considered just using a WCF net TCP binding and a publish/subscribe pattern? WCF would allow you to focus (mostly) on your domain instead of plumbing.

There are lots of WCF samples and even a publish/subscribe framework available on IDesign's download section, which may be useful: http://www.idesign.net
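For a sense of what that could look like, here is a rough sketch of a duplex net.tcp publish/subscribe contract (my own illustration; it is not IDesign's framework, and the names are made up):

using System.ServiceModel;

// Clients subscribe over a duplex net.tcp channel; the server pushes status
// updates back through the callback contract.
[ServiceContract(CallbackContract = typeof(IStatusCallback))]
public interface IMonitoringService
{
    [OperationContract]
    void Subscribe();

    [OperationContract]
    void SendCommand(string command);
}

public interface IStatusCallback
{
    [OperationContract(IsOneWay = true)]
    void StatusUpdate(string status);
}

// Hosting sketch, e.g. in the Windows service's OnStart:
// var host = new ServiceHost(typeof(MonitoringService));
// host.AddServiceEndpoint(typeof(IMonitoringService),
//     new NetTcpBinding(), "net.tcp://localhost:9000/monitor");
// host.Open();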


I am wondering about one thing:

I definitely do not want to start a thread for each connection.

Why is that? Windows has been able to handle hundreds of threads in an application since at least Windows 2000. I've done it; it's really easy to work with if the threads don't need to be synchronized. Especially given that you're doing a lot of I/O (so you're not CPU-bound, and a lot of threads would be blocked on either disk or network communication), I don't understand this restriction.

Have you tested the multi-threaded way and found it lacking in something? Do you intend to also have a database connection for each thread (that would kill the database server, so it's a bad idea, but it's easily solved with a 3-tier design)? Are you worried that you'll have thousands of clients instead of hundreds, and then you'll really have problems? (Though I'd try a thousand threads, or even ten thousand if I had 32+ GB of RAM; again, given that you're not CPU-bound, thread switch time should be absolutely irrelevant.)

Here is the code - to see how this looks running, go to http://mdpopescu.blogspot.com/2009/05/multi-threaded-server.html and click on the picture.

Server class:

public class Server { private static readonly TcpListener listener = new TcpListener(IPAddress.Any, 9999); public Server() { listener.Start(); Console.WriteLine("Started."); while (true) { Console.WriteLine("Waiting for connection..."); var client = listener.AcceptTcpClient(); Console.WriteLine("Connected!"); // each connection has its own thread new Thread(ServeData).Start(client); } } private static void ServeData(object clientSocket) { Console.WriteLine("Started thread " + Thread.CurrentThread.ManagedThreadId); var rnd = new Random(); try { var client = (TcpClient) clientSocket; var stream = client.GetStream(); while (true) { if (rnd.NextDouble() < 0.1) { var msg = Encoding.ASCII.GetBytes("Status update from thread " + Thread.CurrentThread.ManagedThreadId); stream.Write(msg, 0, msg.Length); Console.WriteLine("Status update from thread " + Thread.CurrentThread.ManagedThreadId); } // wait until the next update - I made the wait time so small ''cause I was bored :) Thread.Sleep(new TimeSpan(0, 0, rnd.Next(1, 5))); } } catch (SocketException e) { Console.WriteLine("Socket exception in thread {0}: {1}", Thread.CurrentThread.ManagedThreadId, e); } } }

Server main program:

namespace ManyThreadsServer { internal class Program { private static void Main(string[] args) { new Server(); } } }

Client class:

public class Client { public Client() { var client = new TcpClient(); client.Connect(IPAddress.Loopback, 9999); var msg = new byte[1024]; var stream = client.GetStream(); try { while (true) { int i; while ((i = stream.Read(msg, 0, msg.Length)) != 0) { var data = Encoding.ASCII.GetString(msg, 0, i); Console.WriteLine("Received: {0}", data); } } } catch (SocketException e) { Console.WriteLine("Socket exception in thread {0}: {1}", Thread.CurrentThread.ManagedThreadId, e); } } }

Client main program:

using System; using System.Threading; namespace ManyThreadsClient { internal class Program { private static void Main(string[] args) { // first argument is the number of threads for (var i = 0; i < Int32.Parse(args[0]); i++) new Thread(RunClient).Start(); } private static void RunClient() { new Client(); } } }


I used Kevin's solution, but he says that solution lacks code for reassembly of messages. Developers can use this code for reassembly of messages:

private static void ReceiveCallback(IAsyncResult asyncResult ) { ClientInfo cInfo = (ClientInfo)asyncResult.AsyncState; cInfo.BytesReceived += cInfo.Soket.EndReceive(asyncResult); if (cInfo.RcvBuffer == null) { // First 2 byte is lenght if (cInfo.BytesReceived >= 2) { //this calculation depends on format which your client use for lenght info byte[] len = new byte[ 2 ] ; len[0] = cInfo.LengthBuffer[1]; len[1] = cInfo.LengthBuffer[0]; UInt16 length = BitConverter.ToUInt16( len , 0); // buffering and nulling is very important cInfo.RcvBuffer = new byte[length]; cInfo.BytesReceived = 0; } } else { if (cInfo.BytesReceived == cInfo.RcvBuffer.Length) { //Put your code here, use bytes comes from "cInfo.RcvBuffer" //Send Response but don''t use async send , otherwise your code will not work ( RcvBuffer will be null prematurely and it will ruin your code) int sendLenghts = cInfo.Soket.Send( sendBack, sendBack.Length, SocketFlags.None); // buffering and nulling is very important //Important , set RcvBuffer to null because code will decide to get data or 2 bte lenght according to RcvBuffer''s value(null or initialized) cInfo.RcvBuffer = null; cInfo.BytesReceived = 0; } } ContinueReading(cInfo); } private static void ContinueReading(ClientInfo cInfo) { try { if (cInfo.RcvBuffer != null) { cInfo.Soket.BeginReceive(cInfo.RcvBuffer, cInfo.BytesReceived, cInfo.RcvBuffer.Length - cInfo.BytesReceived, SocketFlags.None, ReceiveCallback, cInfo); } else { cInfo.Soket.BeginReceive(cInfo.LengthBuffer, cInfo.BytesReceived, cInfo.LengthBuffer.Length - cInfo.BytesReceived, SocketFlags.None, ReceiveCallback, cInfo); } } catch (SocketException se) { //Handle exception and Close socket here, use your own code return; } catch (Exception ex) { //Handle exception and Close socket here, use your own code return; } } class ClientInfo { private const int BUFSIZE = 1024 ; // Max size of buffer , depends on solution private const int BUFLENSIZE = 2; // lenght of lenght , depends on solution public int BytesReceived = 0 ; public byte[] RcvBuffer { get; set; } public byte[] LengthBuffer { get; set; } public Socket Soket { get; set; } public ClientInfo(Socket clntSock) { Soket = clntSock; RcvBuffer = null; LengthBuffer = new byte[ BUFLENSIZE ]; } } public static void AcceptCallback(IAsyncResult asyncResult) { Socket servSock = (Socket)asyncResult.AsyncState; Socket clntSock = null; try { clntSock = servSock.EndAccept(asyncResult); ClientInfo cInfo = new ClientInfo(clntSock); Receive( cInfo ); } catch (SocketException se) { clntSock.Close(); } } private static void Receive(ClientInfo cInfo ) { try { if (cInfo.RcvBuffer == null) { cInfo.Soket.BeginReceive(cInfo.LengthBuffer, 0, 2, SocketFlags.None, ReceiveCallback, cInfo); } else { cInfo.Soket.BeginReceive(cInfo.RcvBuffer, 0, cInfo.BytesReceived, SocketFlags.None, ReceiveCallback, cInfo); } } catch (SocketException se) { return; } catch (Exception ex) { return; } }




I would use the AcceptAsync/ConnectAsync/ReceiveAsync/SendAsync methods that were added in .Net 3.5. I have done a benchmark and they are approximately 35% faster (response time and bitrate) with 100 users constantly sending and receiving data.


I've got such a server running in some of my solutions. Here is a very detailed explanation of the different ways to do it in .NET: Get Closer to the Wire with High-Performance Sockets in .NET

Lately I've been looking for ways to improve our code and will be looking into this: "Socket Performance Enhancements in Version 3.5", which was included specifically "for use by applications that use asynchronous network I/O to achieve the highest performance".

"The main feature of these enhancements is the avoidance of the repeated allocation and synchronization of objects during high-volume asynchronous socket I/O. The Begin/End design pattern currently implemented by the Socket class for asynchronous socket I/O requires a System.IAsyncResult object be allocated for each asynchronous socket operation."

You can keep reading if you follow the link. I personally will be testing their sample code tomorrow to benchmark it against what I've got.

Edit: Here you can find working code for both client and server using the new 3.5 SocketAsyncEventArgs, so you can test it within a couple of minutes and go through the code. It is a simple approach, but it is the basis for starting a much larger implementation. Also, this article from almost two years ago in MSDN Magazine was an interesting read.


Using .NET's integrated async I/O (BeginRead, etc.) is a good idea if you can get all the details right. When you properly set up your socket/file handles, it will use the OS's underlying IOCP implementation, allowing your operations to complete without using any threads (or, in the worst case, using a thread that I believe comes from the kernel's I/O thread pool instead of .NET's thread pool, which helps alleviate thread pool congestion).

The main gotcha is to make sure that you open your sockets/files in non-blocking mode. Most of the default convenience functions (like File.OpenRead) don't do this, so you'll need to write your own.
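For example, for files the asynchronous flag has to be requested when the handle is opened; something like this sketch (the path is a placeholder):

using System.IO;

// File.OpenRead does not request overlapped I/O, so BeginRead on that stream can
// end up blocking a thread-pool thread. Passing useAsync: true (FileOptions.Asynchronous)
// opens the handle for true asynchronous completion via IOCP.
FileStream stream = new FileStream(
    @"C:\data\status.log",   // placeholder path
    FileMode.Open,
    FileAccess.Read,
    FileShare.Read,
    4096,                    // buffer size
    true);                   // useAsync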

One of the other main concerns is error handling - properly handling errors when writing asynchronous I/O code is much, much harder than doing it in synchronous code. It's also very easy to end up with race conditions and deadlocks even though you may not be using threads directly, so you need to be aware of this.

If possible, you should try and use a convenience library to ease the process of doing scalable asynchronous IO.

Microsoft's Concurrency and Coordination Runtime is one example of a .NET library designed to ease the difficulty of doing this kind of programming. It looks great, but as I haven't used it, I can't comment on how well it would scale.

For my personal projects that need to do asynchronous network or disk I/O, I use a set of .NET concurrency/I/O tools that I've built over the past year, called Squared.Task. It's inspired by libraries like imvu.task and twisted, and I've included some working examples in the repository that do network I/O. I have also used it in a few applications I've written, the largest publicly released one being NDexer (which uses it for threadless disk I/O). The library was written based on my experience with imvu.task and has a set of fairly comprehensive unit tests, so I strongly encourage you to try it out. If you have any issues with it, I'd be glad to offer you some assistance.

In my opinion, based on my experience, using asynchronous/threadless I/O instead of threads is a worthwhile endeavor on the .NET platform, as long as you're ready to deal with the learning curve. It allows you to avoid the scalability hassles imposed by the cost of Thread objects, and in many cases you can completely avoid the use of locks and mutexes by making careful use of concurrency primitives like Futures and Promises.


Well, .NET sockets seem to provide select() - that's best for handling input. For output I'd have a pool of socket-writer threads listening on a work queue, accepting a socket descriptor/object as part of the work item, so you don't need a thread per socket.
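A sketch of that output side (my own illustration, assuming .NET 4's BlockingCollection; on earlier versions a locked Queue plus Monitor.Wait/Pulse would serve):

using System.Collections.Concurrent;
using System.Net.Sockets;
using System.Threading;

// A work item: which socket to write to and what to write.
public class SendJob
{
    public Socket Socket;
    public byte[] Data;
}

public class SocketWriterPool
{
    private readonly BlockingCollection<SendJob> _queue = new BlockingCollection<SendJob>();

    public SocketWriterPool(int writerThreads)
    {
        for (int i = 0; i < writerThreads; i++)
        {
            new Thread(WriterLoop) { IsBackground = true }.Start();
        }
    }

    // Any thread can queue outgoing data without owning a per-socket thread.
    public void Enqueue(SendJob job)
    {
        _queue.Add(job);
    }

    private void WriterLoop()
    {
        // Each writer blocks here until there is something to send.
        foreach (SendJob job in _queue.GetConsumingEnumerable())
        {
            try { job.Socket.Send(job.Data); }
            catch (SocketException) { /* dead connection; cleanup handled elsewhere */ }
        }
    }
}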


You already got most of the answer via the code samples above. Using asynchronous I/O operations is absolutely the way to go here. Async I/O is the way Win32 is designed internally to scale. The best possible performance you can get is achieved using completion ports, binding your sockets to completion ports and having a thread pool waiting for completion-port completions. The common wisdom is to have 2-4 threads per CPU (core) waiting for completion. I highly recommend going over these three articles by Rick Vicik from the Windows Performance team:

  1. Designing Applications for Performance - Part 1
  2. Designing Applications for Performance - Part 2
  3. Designing Applications for Performance - Part 3

The articles cover mostly the native Windows API, but they are a must-read for anyone trying to get a grasp of scalability and performance. They do have some briefs on the managed side of things too.

The second thing you'll need to do is make sure you go over the Improving .NET Application Performance and Scalability book, which is available online. You will find pertinent and valid advice around the use of threads, asynchronous calls and locks in Chapter 5. But the real gems are in Chapter 17, where you'll find such goodies as practical guidance on tuning your thread pool. My apps had some serious problems until I adjusted the maxIothreads/maxWorkerThreads as per the recommendations in this chapter.
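For a self-hosted Windows service (where the ASP.NET processModel settings do not apply), the equivalent knobs can be set in code; the numbers below are placeholders, and the real values should come from the kind of testing that chapter describes:

using System;
using System.Threading;

int cpus = Environment.ProcessorCount;

// Keep a minimum number of worker and I/O completion threads warm so bursts
// of callbacks are not delayed by the thread pool's slow injection rate.
ThreadPool.SetMinThreads(cpus * 2, cpus * 2);

// Cap the pool so a flood of blocking work cannot create an unbounded number of threads.
ThreadPool.SetMaxThreads(cpus * 25, cpus * 100);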

You say that you want to do a pure TCP server, so my next point is spurious. However, if you find yourself cornered and use the WebRequest class and its derivatives, be warned that there is a dragon guarding that door: the ServicePointManager. This is a configuration class that has one purpose in life: to ruin your performance. Make sure you free your server from the artificially imposed ServicePoint.ConnectionLimit or your application will never scale (I'll let you discover yourself what the default value is...). You may also reconsider the default policy of sending an Expect100Continue header in HTTP requests.
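If you do end up going through WebRequest, those settings can be changed globally; for example (the numbers are illustrative only):

using System.Net;

// The default per-ServicePoint connection limit is tiny and will throttle outbound HTTP.
ServicePointManager.DefaultConnectionLimit = 200;

// Skip the Expect: 100-continue round trip if your peers do not need it.
ServicePointManager.Expect100Continue = false;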

Now about the core socket managed API: things are fairly easy on the Send side, but they are significantly more complex on the Receive side. In order to achieve high throughput and scale, you must ensure that the socket is not flow-controlled because you do not have a buffer posted for receive. Ideally, for high performance you should post ahead 3-4 buffers and post new buffers as soon as you get one back (before you process the one you got back), so you ensure that the socket always has somewhere to deposit the data coming from the network. You'll see shortly why you probably won't be able to achieve this.

After you're done playing with the BeginRead/BeginWrite API and start the serious work, you'll realize that you need security on your traffic, i.e. NTLM/Kerberos authentication and traffic encryption, or at least traffic tampering protection. The way you do this is by using the built-in System.Net.Security.NegotiateStream (or SslStream if you need to go across disparate domains). This means that instead of relying on straight socket asynchronous operations you will rely on the AuthenticatedStream asynchronous operations. As soon as you obtain a socket (either from connect on the client or from accept on the server) you create a stream on the socket and submit it for authentication, by calling either BeginAuthenticateAsClient or BeginAuthenticateAsServer. After the authentication completes (at least you're safe from the native InitiateSecurityContext/AcceptSecurityContext madness...) you will do your authorization by checking the RemoteIdentity property of your authenticated stream and doing whatever ACL verification your product must support. After that you will send messages using BeginWrite and you'll be receiving them with BeginRead. This is the problem I was talking about before: you won't be able to post multiple receive buffers, because the AuthenticatedStream classes don't support this. The BeginRead operation manages all the I/O internally until you have received an entire frame; otherwise it could not handle the message authentication (decrypt the frame and validate the signature on the frame). Though in my experience the job done by the AuthenticatedStream classes is fairly good and you shouldn't have any problems with it. I.e. you should be able to saturate a GB network with only 4-5% CPU. The AuthenticatedStream classes will also impose on you the protocol-specific frame size limitations (16k for SSL, 12k for Kerberos).
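A rough outline of that flow on the server side (a sketch, not the MSDN sample he refers to; the helper names are made up):

using System.Net.Security;
using System.Net.Sockets;

// After Accept: wrap the socket in a NetworkStream, then in a NegotiateStream,
// and kick off asynchronous authentication before any application traffic.
private void OnAccepted(Socket client)
{
    var negotiate = new NegotiateStream(new NetworkStream(client, true), false);
    negotiate.BeginAuthenticateAsServer(AuthCallback, negotiate);
}

private void AuthCallback(IAsyncResult ar)
{
    var negotiate = (NegotiateStream)ar.AsyncState;
    negotiate.EndAuthenticateAsServer();

    // Authorization: inspect the authenticated identity and apply your own ACL checks.
    System.Security.Principal.IIdentity who = negotiate.RemoteIdentity;
    // if (!IsAllowed(who)) { negotiate.Close(); return; }   // hypothetical check

    // From here on, traffic goes through BeginRead/BeginWrite on the stream,
    // one outstanding read at a time (the stream handles framing internally).
    // negotiate.BeginRead(buffer, 0, buffer.Length, ReadCallback, negotiate);
}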

This should get you started on the right track. I'm not going to post code here; there is a perfectly good example on MSDN. I've done many projects like this and I was able to scale to about 1000 connected users without problems. Above that, you'll need to modify registry keys to allow the kernel more socket handles. And make sure you deploy on a server OS, that is W2K3, not XP or Vista (i.e. a client OS); it makes a big difference.

BTW, make sure that if you have database operations on the server, or file I/O, you also use the async flavor for them, or you'll drain the thread pool in no time. For SQL Server connections, make sure you add 'Asynchronous Processing=true' to the connection string.
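With ADO.NET that looks roughly like this (a sketch; the connection string and query are placeholders):

using System.Data.SqlClient;

// 'Asynchronous Processing=true' is what enables the Begin/End methods on SqlCommand
// in this era of the framework (later versions relaxed the requirement).
var conn = new SqlConnection(
    "Data Source=.;Initial Catalog=Monitoring;Integrated Security=SSPI;Asynchronous Processing=true");
conn.Open();

var cmd = new SqlCommand("SELECT DeviceId, Status FROM Devices", conn); // placeholder query
cmd.BeginExecuteReader(ar =>
{
    using (SqlDataReader reader = cmd.EndExecuteReader(ar))
    {
        while (reader.Read())
        {
            // consume rows; no thread is blocked while the query itself runs
        }
    }
    conn.Close();
}, null);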



You can use the Push Framework, an open-source framework for high-performance server development. It is built on IOCP and is suitable for push scenarios and message broadcasting.

http://www.pushframework.com


You could try using a framework called ACE (Adaptive Communication Environment), which is a generic C++ framework for network servers. It's a very solid, mature product and is designed to support high-reliability, high-volume applications up to telco grade.

The framework deals with quite a wide range of concurrency models and probably has one suitable for your application out of the box. This should make the system easier to debug, as most of the nasty concurrency issues have already been sorted out. The trade-off here is that the framework is written in C++ and is not the most warm and fluffy of code bases. On the other hand, you get tested, industrial-grade network infrastructure and a highly scalable architecture out of the box.


To people copy-pasting the accepted answer: you can rewrite the acceptCallback method, removing all calls to _serverSocket.BeginAccept(new AsyncCallback(acceptCallback), _serverSocket); and putting a single one in a finally{} clause, this way:

private void acceptCallback(IAsyncResult result) { xConnection conn = new xConnection(); try { //Finish accepting the connection System.Net.Sockets.Socket s = (System.Net.Sockets.Socket)result.AsyncState; conn = new xConnection(); conn.socket = s.EndAccept(result); conn.buffer = new byte[_bufferSize]; lock (_sockets) { _sockets.Add(conn); } //Queue recieving of data from the connection conn.socket.BeginReceive(conn.buffer, 0, conn.buffer.Length, SocketFlags.None, new AsyncCallback(ReceiveCallback), conn); } catch (SocketException e) { if (conn.socket != null) { conn.socket.Close(); lock (_sockets) { _sockets.Remove(conn); } } } catch (Exception e) { if (conn.socket != null) { conn.socket.Close(); lock (_sockets) { _sockets.Remove(conn); } } } finally { //Queue the next accept, think this should be here, stop attacks based on killing the waiting listeners _serverSocket.BeginAccept(new AsyncCallback(acceptCallback), _serverSocket); } }

You could even remove the first catch, since its content is the same, but it's a template method and you should use typed exceptions to better handle the exceptions and understand what caused the error, so just implement those catches with some useful code.