c# stream jpeg frame rtp

c# - Saving the JPEG file coming from a Network Camera RTP Stream




See my implementation at https://net7mma.codeplex.com/SourceControl/latest#Rtp/RFC2435Frame.cs

It is much simpler than the implementation above, and the library also has an RtspClient and an RtpClient class if you need them.


#region Methods

/// <summary>
/// Writes the packets to a memory stream and creates the default header and quantization tables if necessary.
/// Assigns Image from the result.
/// </summary>
internal virtual void ProcessPackets(bool allowLegacyPackets = false)
{
    if (!Complete) return;

    byte TypeSpecific, Type, Quality;
    ushort Width, Height, RestartInterval = 0, RestartCount = 0;
    uint FragmentOffset;

    //A byte which is bit mapped, each bit indicates 16 bit coefficients for the table.
    byte PrecisionTable = 0;

    ArraySegment<byte> tables = default(ArraySegment<byte>);

    Buffer = new System.IO.MemoryStream();

    //Loop each packet
    foreach (RtpPacket packet in m_Packets.Values)
    {
        //Payload starts at the offset of the first PayloadOctet
        int offset = packet.NonPayloadOctets;

        if (packet.Extension) throw new NotSupportedException("RFC2035 nor RFC2435 defines extensions.");

        //Decode RtpJpeg Header
        TypeSpecific = (packet.Payload.Array[packet.Payload.Offset + offset++]);
        FragmentOffset = (uint)(packet.Payload.Array[packet.Payload.Offset + offset++] << 16 |
                                packet.Payload.Array[packet.Payload.Offset + offset++] << 8 |
                                packet.Payload.Array[packet.Payload.Offset + offset++]);

        #region RFC2435 - The Type Field

        /*
           4.1.  The Type Field

           The Type field defines the abbreviated table-specification and
           additional JFIF-style parameters not defined by JPEG, since they are
           not present in the body of the transmitted JPEG data.

           Three ranges of the type field are currently defined. Types 0-63 are
           reserved as fixed, well-known mappings to be defined by this document
           and future revisions of this document. Types 64-127 are the same as
           types 0-63, except that restart markers are present in the JPEG data
           and a Restart Marker header appears immediately following the main
           JPEG header. Types 128-255 are free to be dynamically defined by a
           session setup protocol (which is beyond the scope of this document).

           Of the first group of fixed mappings, types 0 and 1 are currently
           defined, along with the corresponding types 64 and 65 that indicate
           the presence of restart markers.  They correspond to an abbreviated
           table-specification indicating the "Baseline DCT sequential" mode,
           8-bit samples, square pixels, three components in the YUV color
           space, standard Huffman tables as defined in [1, Annex K.3], and a
           single interleaved scan with a scan component selector indicating
           components 1, 2, and 3 in that order.  The Y, U, and V color planes
           correspond to component numbers 1, 2, and 3, respectively.  Component
           1 (i.e., the luminance plane) uses Huffman table number 0 and
           quantization table number 0 (defined below) and components 2 and 3
           (i.e., the chrominance planes) use Huffman table number 1 and
           quantization table number 1 (defined below).

           Type numbers 2-5 are reserved and SHOULD NOT be used.  Applications
           based on previous versions of this document (RFC 2035) should be
           updated to indicate the presence of restart markers with type 64 or
           65 and the Restart Marker header.

           The two RTP/JPEG types currently defined are described below:

                                   horizontal   vertical   Quantization
                 types  component samp. fact.  samp. fact. table number
               +-------------------------------------------------------+
               |       |  1 (Y)  |     2     |     1     |      0      |
               | 0, 64 |  2 (U)  |     1     |     1     |      1      |
               |       |  3 (V)  |     1     |     1     |      1      |
               +-------------------------------------------------------+
               |       |  1 (Y)  |     2     |     2     |      0      |
               | 1, 65 |  2 (U)  |     1     |     1     |      1      |
               |       |  3 (V)  |     1     |     1     |      1      |
               +-------------------------------------------------------+

           These sampling factors indicate that the chrominance components of
           type 0 video is downsampled horizontally by 2 (often called 4:2:2)
           while the chrominance components of type 1 video are downsampled both
           horizontally and vertically by 2 (often called 4:2:0).

           Types 0 and 1 can be used to carry both progressively scanned and
           interlaced image data.  This is encoded using the Type-specific field
           in the main JPEG header.  The following values are defined:

              0 : Image is progressively scanned.  On a computer monitor, it can
                  be displayed as-is at the specified width and height.

              1 : Image is an odd field of an interlaced video signal.  The
                  height specified in the main JPEG header is half of the height
                  of the entire displayed image.  This field should be de-
                  interlaced with the even field following it such that lines
                  from each of the images alternate.  Corresponding lines from
                  the even field should appear just above those same lines from
                  the odd field.

              2 : Image is an even field of an interlaced video signal.

              3 : Image is a single field from an interlaced video signal, but
                  it should be displayed full frame as if it were received as
                  both the odd & even fields of the frame.  On a computer
                  monitor, each line in the image should be displayed twice,
                  doubling the height of the image.
         */

        #endregion

        Type = (packet.Payload.Array[packet.Payload.Offset + offset++]);

        //Check for a RtpJpeg Type of less than 5 used in RFC2035 for which RFC2435 is the errata
        if (!allowLegacyPackets && Type >= 2 && Type <= 5)
        {
            //Should allow for 2035 decoding separately
            throw new ArgumentException("Type numbers 2-5 are reserved and SHOULD NOT be used. Applications based on RFC 2035 should be updated to indicate the presence of restart markers with type 64 or 65 and the Restart Marker header.");
        }

        Quality = packet.Payload.Array[packet.Payload.Offset + offset++];
        Width = (ushort)(packet.Payload.Array[packet.Payload.Offset + offset++] * 8);  // in 8 pixel multiples
        Height = (ushort)(packet.Payload.Array[packet.Payload.Offset + offset++] * 8); // in 8 pixel multiples

        //It is worth noting Rtp does not care what you send and more tags such as comments and or higher resolution pictures may be sent and these values will simply be ignored.

        //Restart Interval 64 - 127
        if (Type > 63 && Type < 128)
        {
            /*
               This header MUST be present immediately after the main JPEG header
               when using types 64-127.  It provides the additional information
               required to properly decode a data stream containing restart
               markers.

                0                   1                   2                   3
                0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
               |       Restart Interval        |F|L|       Restart Count       |
               +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
             */
            RestartInterval = (ushort)(packet.Payload.Array[packet.Payload.Offset + offset++] << 8 |
                                       packet.Payload.Array[packet.Payload.Offset + offset++]);
            RestartCount = (ushort)((packet.Payload.Array[packet.Payload.Offset + offset++] << 8 |
                                     packet.Payload.Array[packet.Payload.Offset + offset++]) & 0x3fff);
        }

        // A Q value of 255 denotes that the quantization table mapping is dynamic and can change on every frame.
        // Decoders MUST NOT depend on any previous version of the tables, and need to reload these tables on every frame.
        if (/*FragmentOffset == 0 || */Buffer.Position == 0)
        {
            //RFC2435 http://tools.ietf.org/search/rfc2435#section-3.1.8
            //3.1.8.  Quantization Table header
            /*
               This header MUST be present after the main JPEG header (and after
               the Restart Marker header, if present) when using Q values 128-255.
               It provides a way to specify the quantization tables associated
               with this Q value in-band.
             */
            if (Quality == 0) throw new InvalidOperationException("(Q)uality = 0 is Reserved.");
            else if (Quality >= 100)
            {
                /* http://tools.ietf.org/search/rfc2435#section-3.1.8
                 * Quantization Table Header
                 * -------------------------
                    0                   1                   2                   3
                    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
                   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                   |      MBZ      |   Precision   |             Length            |
                   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                   |                    Quantization Table Data                    |
                   |                              ...                              |
                   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
                 */
                if ((packet.Payload.Array[packet.Payload.Offset + offset++]) != 0)
                {
                    //Must Be Zero is Not Zero
                    if (System.Diagnostics.Debugger.IsAttached) System.Diagnostics.Debugger.Break();
                }

                //Read the PrecisionTable (notes below)
                PrecisionTable = (packet.Payload.Array[packet.Payload.Offset + offset++]);

                #region RFC2435 Length Field

                /*
                   The Length field is set to the length in bytes of the quantization
                   table data to follow.  The Length field MAY be set to zero to
                   indicate that no quantization table data is included in this frame.
                   See section 4.2 for more information.  If the Length field in a
                   received packet is larger than the remaining number of bytes, the
                   packet MUST be discarded.

                   When table data is included, the number of tables present depends on
                   the JPEG type field.  For example, type 0 uses two tables (one for
                   the luminance component and one shared by the chrominance
                   components).  Each table is an array of 64 values given in zig-zag
                   order, identical to the format used in a JFIF DQT marker segment.

                 * PrecisionTable

                   For each quantization table present, a bit in the Precision field
                   specifies the size of the coefficients in that table.  If the bit is
                   zero, the coefficients are 8 bits yielding a table length of 64
                   bytes.  If the bit is one, the coefficients are 16 bits for a table
                   length of 128 bytes.  For 16 bit tables, the coefficients are
                   presented in network byte order.  The rightmost bit in the Precision
                   field (bit 15 in the diagram above) corresponds to the first table
                   and each additional table uses the next bit to the left.  Bits
                   beyond those corresponding to the tables needed by the type in use
                   MUST be ignored.
                 */

                #endregion

                //Length of all tables
                ushort Length = (ushort)(packet.Payload.Array[packet.Payload.Offset + offset++] << 8 |
                                         packet.Payload.Array[packet.Payload.Offset + offset++]);

                //If there is Table Data Read it from the payload, Length should never be larger than 128 * tableCount
                if (Length == 0 && Quality == byte.MaxValue) throw new InvalidOperationException("RtpPackets MUST NOT contain Q = 255 and Length = 0.");
                else if (Length > packet.Payload.Count - offset) //If the indicated length is greater than that of the packet taking into account the offset
                    continue; // The packet must be discarded

                //Copy the tables present
                tables = new ArraySegment<byte>(packet.Payload.Array, packet.Payload.Offset + offset, (int)Length);
                offset += (int)Length;
            }
            else // Create them from the given Quality parameter ** Duality (Unify Branch)
            {
                tables = new ArraySegment<byte>(CreateQuantizationTables(Type, Quality, PrecisionTable));
            }

            //Write the JFIF Header after reading or generating the QTables
            byte[] header = CreateJFIFHeader(Type, Width, Height, tables, PrecisionTable, RestartInterval);
            Buffer.Write(header, 0, header.Length);
        }

        //Write the Payload data from the offset
        Buffer.Write(packet.Payload.Array, packet.Payload.Offset + offset, packet.Payload.Count - (offset + packet.PaddingOctets));
    }

    //Check for EOI Marker and write if not found
    if (Buffer.Position == Buffer.Length || Buffer.ReadByte() != JpegMarkers.EndOfInformation)
    {
        Buffer.WriteByte(JpegMarkers.Prefix);
        Buffer.WriteByte(JpegMarkers.EndOfInformation);
    }

    //Create the Image from the Buffer
    Image = System.Drawing.Image.FromStream(Buffer);
}
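For completeness: once ProcessPackets has assigned Image, writing the decoded frame to disk is a single GDI+ call. A minimal sketch, assuming a frame object that exposes the Image property populated above (the variable name and file path are placeholders, not part of the library):

using System.Drawing;
using System.Drawing.Imaging;

// Hypothetical: 'frame' is an RFC2435Frame-like object whose ProcessPackets()
// has already run, so its Image property holds the decoded picture.
using (Image decoded = frame.Image)
{
    // Re-encoding through GDI+ also confirms the reassembled JFIF data parses.
    decoded.Save("frame.jpg", ImageFormat.Jpeg);
}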

I have an RTP stream that is receiving a JPEG transmission from a Samsung network camera.

I don't know much about how the JPEG format works, but I do know that this incoming JFIF/JPEG stream is giving me the JPEG header:

+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Type-specific |              Fragment Offset                  |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|      Type     |       Q       |     Width     |     Height    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

and then

+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|       Restart Interval        |F|L|       Restart Count       |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

and then in the first packet, there is this header

+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|      MBZ      |   Precision   |             Length            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                    Quantization Table Data                    |
|                              ...                              |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
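For reference, pulling those fixed fields out of the payload is plain byte arithmetic. This is not the asker's code, just a minimal C# sketch of the offsets RFC 2435 defines, assuming the payload array starts right after a 12-byte RTP header with no CSRC list or extension:

// payload: the RTP payload bytes, starting at the RTP/JPEG main header.
static void ReadJpegHeader(byte[] payload)
{
    byte typeSpecific   = payload[0];
    int  fragmentOffset = payload[1] << 16 | payload[2] << 8 | payload[3]; // 24-bit, big-endian
    byte type           = payload[4];
    byte q              = payload[5];
    int  width          = payload[6] * 8;   // stored in units of 8 pixels
    int  height         = payload[7] * 8;

    int offset = 8;

    // Restart Marker header is only present for types 64-127.
    if (type >= 64 && type <= 127)
    {
        int restartInterval = payload[offset] << 8 | payload[offset + 1];
        int restartCount    = (payload[offset + 2] << 8 | payload[offset + 3]) & 0x3FFF;
        offset += 4;
    }

    // Quantization Table header is only present in the first packet of a frame
    // (fragment offset 0) and only when Q is 128-255.
    if (fragmentOffset == 0 && q >= 128)
    {
        byte mbz       = payload[offset];
        byte precision = payload[offset + 1];
        int  length    = payload[offset + 2] << 8 | payload[offset + 3];
        offset += 4 + length; // the JPEG entropy-coded scan data starts here
    }
}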

I think I parsed them correctly, and this is a code snippet showing how I store each packet of the JPEG stream.

int extraOff = 0;

public bool Decode(byte* data, int offset)
{
    if (_initialized == false)
    {
        type_specific = data[offset + 0];

        _frag[0] = data[offset + 3];
        _frag[1] = data[offset + 2];
        _frag[2] = data[offset + 1];
        _frag[3] = 0x0;
        fragment_offset = System.BitConverter.ToInt32(_frag, 0);

        jpeg_type = data[offset + 4];
        q = data[offset + 5];
        width = data[offset + 6];
        height = data[offset + 7];

        _frag[0] = data[offset + 8];
        _frag[1] = data[offset + 9];
        restart_interval = (ushort)(System.BitConverter.ToUInt16(_frag, 0) & 0x3FF);

        if (width == 0) /** elphel 333 full image size more than just one byte less that < 256 **/
            width = 256;

        byte jpegMBZ = (byte)(data[offset + 12]);
        byte jpegPrecision = (byte)(data[offset + 13]);
        int jpegLength = (int)((data[offset + 14]) * 256 + data[offset + 15]);

        byte[] tableData1 = new byte[64];
        byte[] tableData2 = new byte[64];
        for (int i = 0; i < 64; ++i)
        {
            tableData1[i] = data[offset + 16 + i];
            tableData2[i] = data[offset + 16 + 64 + i];
        }

        byte[] tmp = new byte[1024];
        _offset = Utils.MakeHeaders(tmp, jpeg_type, width, height, tableData1, tableData2, 0);
        qtable = new byte[_offset];
        Array.Copy(tmp, 0, _buffer, 0, _offset);
        _initialized = true;
        tmp = null;
        GC.Collect();

        extraOff = jpegLength + 4;
    }
    else
    {
        _frag[0] = data[15]; // 12 + 3
        _frag[1] = data[14]; // 12 + 2
        _frag[2] = data[13]; // 12 + 1
        _frag[3] = 0x0;
        fragment_offset = System.BitConverter.ToInt32(_frag, 0);

        _frag[0] = data[offset + 8];
        _frag[1] = data[offset + 9];
        restart_interval = (ushort)(System.BitConverter.ToUInt16(_frag, 0) & 0x3FF);

        extraOff = 0;
    }

    return (next_fragment_offset == fragment_offset);
}

public unsafe bool Write(byte* data, int size, out bool sync) //Write(ref byte[] data, int size, out bool sync)
{
    if (Decode(data, 12))
    {
        for (int i = 24 + extraOff; i < size; )
            buffer_ptr[_offset++] = data[i++];

        size -= 24 + extraOff;
        next_fragment_offset += size;
        sync = true;
        return ((data[1] >> 7) == 1);
    }
    else
    {
        _initialized = false;
        _offset = qtable.Length;
        next_fragment_offset = 0;
        sync = false;
        return false;
    }
}
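One possible way to make the concatenation above less sensitive to packet order is to key each scan fragment by the 24-bit Fragment Offset instead of accumulating sizes manually. This is only a rough sketch of that idea: the class and method names are made up for illustration, and the JFIF header construction is assumed to happen elsewhere (e.g. in something like Utils.MakeHeaders above).

using System.Collections.Generic;
using System.IO;

// Collects JPEG scan data keyed by the RFC 2435 Fragment Offset, so reordered
// or duplicated packets cannot corrupt the concatenation.
class ScanAssembler
{
    readonly SortedDictionary<int, byte[]> _fragments = new SortedDictionary<int, byte[]>();

    public void Add(int fragmentOffset, byte[] scanBytes)
    {
        _fragments[fragmentOffset] = scanBytes;
    }

    // Writes the JFIF headers (built elsewhere) followed by the scan data in offset order.
    public byte[] Assemble(byte[] jfifHeader)
    {
        using (var ms = new MemoryStream())
        {
            ms.Write(jfifHeader, 0, jfifHeader.Length);
            foreach (byte[] fragment in _fragments.Values)
                ms.Write(fragment, 0, fragment.Length);
            return ms.ToArray();
        }
    }
}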

The problem I have is that the JPEG file I save to my hard drive, as the result of concatenating the JPEG payloads, does not display correctly: every image viewer shows the data from the FIRST TWO incoming packets but leaves the rest GRAY. I believe this means the data from the third through the last RTP packet is not being parsed or saved correctly.

This is the frame I got, at http://rectsoft.net/ideerge/zzz.jpg

edited: this is how I call the Write function

size = rawBuffer.Length;

if (sync == true)
{
    unsafe
    {
        fixed (byte* p = rawBuffer)
        {
            if (_frame.Write(p, size, out sync)) //if (_frame.Write(ref _buffer, size, out sync))
            {
                // i save my buffer to file here
            }
        }
    }
}
else if ((rawBuffer[1] >> 7) == 1)
{
    sync = true;
}

The rawBuffer is filled by my UDP receive function; it behaves exactly the same way as with my H.264 stream, and it matches 100% what I captured with WIRESHARK while streaming to VLC.
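For context, a bare-bones receive loop of the kind described here can be built on UdpClient; the port number and the marker-bit/payload-type checks below are illustrative assumptions, not the asker's actual receive function:

using System.Net;
using System.Net.Sockets;

// Minimal blocking RTP receive loop. 5004 is just a placeholder port;
// the real port comes from the RTSP SETUP/SDP exchange with the camera.
using (var client = new UdpClient(5004))
{
    var remote = new IPEndPoint(IPAddress.Any, 0);
    while (true)
    {
        byte[] rawBuffer = client.Receive(ref remote);
        if (rawBuffer.Length < 12) continue;          // too short to be an RTP packet

        bool marker = (rawBuffer[1] & 0x80) != 0;     // marker bit: last packet of the frame
        int payloadType = rawBuffer[1] & 0x7F;        // 26 = JPEG (static payload type)

        // hand rawBuffer to the frame assembler here, e.g. _frame.Write(...)
    }
}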