Improving Audio Streaming Using the sharedin Audio Streaming Codec and Player



Audio streaming is a one-way audio transmission over a data network. It is widely used to listen to audio clips and radio from the Internet on computers, tablets and smartphones. In addition, computers at home are commonly set up to stream a user’s music collection to a digital media hub connected to a stereo or 5.1 surround system. Listening to momentary blips in music or a conversation is annoying, and the only way to compensate for that over an erratic network such as the Internet is to get some of the audio data into the computer before you start listening to it. In streaming audio, the client and server cooperate to deliver uninterrupted sound. The client side stores a few seconds of sound in a buffer before it starts sending it to the speakers, and throughout the session it continues to receive audio data ahead of playback.
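The client-side behaviour described above can be sketched as a small prebuffer: playback does not start until a few seconds of audio are queued, and the network keeps filling the queue while the audio device drains it. This is a minimal illustration, not the player's actual implementation; the class name and chunk sizes are assumptions.

```python
from collections import deque

class JitterBuffer:
    """Client-side buffer: hold a few seconds of audio before playback starts."""

    def __init__(self, prebuffer_seconds=3.0, chunk_seconds=0.5):
        self.chunks = deque()
        self.prebuffer_chunks = int(prebuffer_seconds / chunk_seconds)
        self.playing = False

    def push(self, chunk):
        """Called as audio data arrives from the network."""
        self.chunks.append(chunk)
        # Start playback only once enough audio is buffered.
        if not self.playing and len(self.chunks) >= self.prebuffer_chunks:
            self.playing = True

    def pop(self):
        """Called by the audio device; returns None while still prebuffering."""
        if not self.playing or not self.chunks:
            return None
        return self.chunks.popleft()
```

Because the buffer keeps being refilled ahead of playback, a momentary network blip is absorbed by the queued seconds of audio rather than heard as a dropout.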

High-resolution audio

The industry has been transformed by digital downloads from sites such as iTunes, marking a shift away from physical media like vinyl, tapes and CDs. Formats including MP3 and AAC make it easy to buy, listen to and store our tunes. When it comes to sound quality, however, these formats just don’t cut the mustard. The use of lossy compression means that data is lost in the encoding process, so resolution is sacrificed for the sake of convenience and smaller file sizes.

The encoding process

Lossy codecs: Many of the more popular codecs in the software world are lossy, meaning that they reduce quality by some amount in order to achieve compression. Often, this type of compression is virtually indistinguishable from the original uncompressed sound or images, depending on the codec and the settings used.[4] Smaller data sets ease the strain on relatively expensive storage sub-systems such as non-volatile memory and hard disk, as well as write-once-read-many formats such as CD-ROM, DVD and Blu-ray Disc. Lower data rates also reduce cost and improve performance when the data is transmitted.

Lossless codecs: There are also many lossless codecs, which are typically used for archiving data in a compressed form while retaining all of the information present in the original stream. If preserving the original quality of the stream is more important than eliminating the correspondingly larger data sizes, lossless codecs are preferred. This is especially true if the data is to undergo further processing (for example editing), in which case the repeated application of processing (encoding and decoding) with lossy codecs will degrade the quality of the resulting data until it is no longer identifiable (visually, audibly or both). Using more than one codec or encoding scheme successively can also degrade quality significantly. The decreasing cost of storage capacity and network bandwidth has a tendency to reduce the need for lossy codecs for some media.
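The generational-loss point above can be demonstrated with a toy model: treat one lossy encode/decode pass as coarse quantisation of the samples, and a lossless pass as an exact copy. The numbers and the quantisation step are invented for illustration only, not taken from any real codec.

```python
def lossy_roundtrip(samples, step=8):
    # Model one lossy encode/decode pass as coarse quantisation:
    # detail smaller than `step` is discarded for a smaller file.
    return [round(s / step) * step for s in samples]

def lossless_roundtrip(samples):
    # A lossless codec reproduces the input bit-for-bit.
    return list(samples)

original = [3, 10, 17, 25, 33, 41]

# First generation: some detail is lost, but the result may still be
# audibly indistinguishable from the original.
once = lossy_roundtrip(original)

# Editing between generations (here a 50% volume cut, later undone)
# shifts the samples, so the second lossy pass discards detail again
# and the error accumulates instead of staying constant.
edited = [s // 2 for s in once]
twice = [s * 2 for s in lossy_roundtrip(edited)]

def total_error(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))
```

Running this shows the error after two lossy generations with an edit in between is larger than after one, while the lossless round trip is exact, which is why lossless codecs are preferred for material that will be edited repeatedly.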

Achievement of lossless codecs and bandwidth

HLS is basically a protocol allowing media to be streamed from any bog-standard HTTP server whilst keeping some of the functionality provided by dedicated media servers. For example, HLS allows you to adjust the video quality in real time, based on the bandwidth available to the client.

It can be used for ‘Live’ broadcasts and Video on Demand, and its file-based nature means that it plays really well with Content Distribution Networks. Once prepared for delivery, the content needed to provide a stream is static (though a ‘Live’ stream, by its very nature, won’t be), so the stream can be served from any HTTP(S) server.

Great news for those in the creative industry – HLS also supports (incredibly) basic DRM in the form of AES-128 encryption. We won’t be covering that in depth, though, as it’s outside the scope of this piece.

The HLS Output File Structure and Codec

HLS is a truly adaptive bitrate technology. When audio is encoded to HLS, multiple files are created for different bandwidths and different resolutions. The files are packaged in the MPEG-2 Transport Stream (MPEG2-TS) container format. The streams are mapped to the client in real time using an .m3u8 index file, based on screen size and available bandwidth.
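As an illustration of the structure just described, a master .m3u8 index file lists the available variants with their bandwidths, each pointing at its own media playlist. The URIs and bitrates below are invented for this sketch; `mp4a.40.2` is the standard codec string for AAC-LC audio.

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=64000,CODECS="mp4a.40.2"
audio_64k/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=128000,CODECS="mp4a.40.2"
audio_128k/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=256000,CODECS="mp4a.40.2"
audio_256k/index.m3u8
```

The client reads this file once, then picks whichever variant playlist suits its current bandwidth.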





To make the system scalable and adaptable to the bandwidth of the network, the video flow is encoded at different quality levels. Thus, depending on the bandwidth and transfer speed of the network, the video will play at different qualities.

To implement this, the system must encode the video in different qualities and generate an index file that contains the locations of the different quality levels.

The client software manages the different qualities internally, making requests for the highest quality possible within the bandwidth of the network. Thus the video always plays at the highest possible quality: lower quality on 3G networks and the highest quality on Wi-Fi broadband.
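The selection rule the client applies can be sketched in a few lines: from the variants listed in the index file, pick the highest-bandwidth one that fits under the measured throughput, falling back to the lowest variant otherwise. The function name and tuple layout are assumptions for this sketch.

```python
def pick_variant(variants, measured_bps):
    """Choose the highest-bandwidth variant the connection can sustain.

    `variants` is a list of (bandwidth_bps, url) pairs from the index file.
    Falls back to the lowest variant if even that exceeds the measurement.
    """
    variants = sorted(variants)  # ascending by declared bandwidth
    best = variants[0]
    for bandwidth, url in variants:
        if bandwidth <= measured_bps:
            best = (bandwidth, url)
    return best
```

Re-running this check as throughput is re-measured is what lets playback step down on a 3G connection and back up on Wi-Fi.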


sharedin Live Streaming

sharedin live streaming consists of three parts: the server component, the distribution component, and the client software.


1- Server Component

The server requires a media encoder, which can be off-the-shelf hardware, and a way to break the encoded media into segments and save them as files, which can be either software or a hardware device.

1-1 Media Encoder

Encoding should be set to a format supported by the client device.

1-2 Stream Segmenter

The sharedin stream segmenter is software that reads the Transport Stream from the local network and divides it into a series of small media files of equal duration. Even though each segment is in a separate file, the audio files are cut from a continuous stream and can be reconstructed seamlessly. The segmenter can also encrypt each media segment and create a key file.
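The cutting step can be sketched as follows. For simplicity this toy segmenter cuts on a fixed byte count as a stand-in for fixed duration; a real segmenter cuts the Transport Stream on packet boundaries, and the function and file names here are invented.

```python
import io

def segment_stream(stream, segment_bytes, prefix="seg"):
    """Cut a continuous stream into equal-sized pieces.

    Equal *size* stands in for equal *duration* in this sketch; a real
    segmenter measures time, not bytes, and respects TS packet boundaries.
    """
    segments = []
    index = 0
    while True:
        chunk = stream.read(segment_bytes)
        if not chunk:
            break
        name = f"{prefix}{index:05d}.ts"
        # A real segmenter would write `chunk` to disk under `name`
        # (optionally AES-128 encrypting it first).
        segments.append((name, chunk))
        index += 1
    return segments

segments = segment_stream(io.BytesIO(b"x" * 2500), segment_bytes=1000)
```

Concatenating the chunks back together yields the original stream byte-for-byte, which is the "reconstructed seamlessly" property the text describes.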

1-3 File Segmenter

The segmenter also creates an index file containing references to the individual media files. Each time the segmenter completes a new media file of equal duration, the index file is updated.
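A minimal sketch of regenerating that index file each time a segment completes: the playlist lists every finished segment, and omitting the #EXT-X-ENDLIST tag marks the broadcast as still ongoing. The function name and tag subset are assumptions; real playlists carry more metadata.

```python
def write_index(segment_names, target_duration, ended=False):
    """Build a minimal .m3u8 media playlist from the finished segments.

    Regenerated whenever the segmenter completes a new file; leaving out
    #EXT-X-ENDLIST tells clients the stream is still live.
    """
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for name in segment_names:
        lines.append(f"#EXTINF:{target_duration:.1f},")  # segment duration
        lines.append(name)
    if ended:
        lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines) + "\n"
```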





2- Distribution Components

The distribution system is a web server or a web caching system that delivers the media files and index files to the client over HTTP. No custom server modules are required to deliver the content, and typically very little configuration is needed on the web server.
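To give a sense of how little configuration is needed, a hypothetical nginx location block is enough: the server only has to deliver static files with the right MIME types (`application/vnd.apple.mpegurl` for index files, `video/mp2t` for Transport Stream segments). The paths are invented for this sketch.

```
# Hypothetical nginx configuration: serve pre-segmented streams as
# plain static files; no streaming module is required.
location /streams/ {
    root /var/www;
    types {
        application/vnd.apple.mpegurl m3u8;
        video/mp2t                    ts;
    }
}
```

Media segments never change once written, so they cache well; only the index files of a live broadcast need a short cache lifetime.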


3- sharedin Client Component

sharedin begins by fetching the index file, based on a URL identifying the stream. The index file in turn specifies the location of the available media files, decryption keys, and any alternate streams available. For the selected stream, the sharedin client downloads each available media file in sequence. Each file contains a consecutive segment of the stream. Once it has downloaded a sufficient amount of data, the client begins presenting the reassembled stream to the user. The sharedin client is responsible for fetching any decryption keys, authenticating or presenting a user interface to allow authentication, and decrypting media files as needed. This process continues until the client encounters the #EXT-X-ENDLIST tag in the index file. If no #EXT-X-ENDLIST tag is present, the index file is part of an ongoing broadcast. During ongoing broadcasts, the client loads a new version of the index file periodically. The sharedin client looks for new media files and encryption keys in the updated index and adds these URLs to its queue.
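The client loop just described can be sketched as follows. `fetch` is a hypothetical callable mapping a URL to its contents, standing in for the HTTP layer; decryption and the user interface are omitted.

```python
import time

def parse_index(text):
    """Return (segment URIs, ended?) from a media playlist."""
    uris = [ln for ln in text.splitlines() if ln and not ln.startswith("#")]
    return uris, "#EXT-X-ENDLIST" in text

def play(fetch, index_url, reload_seconds=10):
    """Fetch the index, queue unseen segments, and reload the index
    periodically until #EXT-X-ENDLIST marks the end of the stream."""
    seen, queue = set(), []
    while True:
        uris, ended = parse_index(fetch(index_url))
        for uri in uris:
            if uri not in seen:           # only new media files are queued
                seen.add(uri)
                queue.append(fetch(uri))  # download segment (decrypt here if keyed)
        if ended:
            return queue
        time.sleep(reload_seconds)        # live broadcast: poll for updates
```

For an ongoing broadcast the loop keeps reloading the index and appending only the URLs it has not seen before, exactly the queueing behaviour the paragraph describes.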




Protection against errors

In this case, several alternate flows with the same video quality are generated, and their locations are listed in the index file.

Management of all the files is done by the client, so that if the first flow fails, it uses the next one, and so on.
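That client-side failover can be sketched in a few lines: try the redundant flows listed in the index in order, and fall back to the next on failure. `fetch` is again a hypothetical callable that raises on a network error.

```python
def fetch_with_failover(urls, fetch):
    """Try redundant flows in index order; on failure, fall back to the next.

    `fetch` is a hypothetical network callable that raises OSError when a
    flow is unreachable. Raises the last error if every flow fails.
    """
    last_error = None
    for url in urls:
        try:
            return fetch(url)
        except OSError as err:
            last_error = err  # this flow failed; try the next one
    raise last_error
```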

Encryption with AES-128

Content can be easily encrypted. Currently sharedin supports AES-128 encryption using 16-octet keys. There are three ways in which encryption can be applied: using an existing key, using a randomly generated key, or using a new key that’s generated for every X number of video segments. The more video segments that have unique encryption, the greater the overhead and the lower the performance. Keys can be served over SSL for an added layer of protection.
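In the index file, encryption is signalled with the #EXT-X-KEY tag: every segment after the tag is encrypted with that key until a new tag appears. The URIs below are invented for this sketch; serving them over HTTPS gives the extra protection mentioned above.

```
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-KEY:METHOD=AES-128,URI="https://example.com/keys/key1.bin"
#EXTINF:10,
seg00000.ts
#EXTINF:10,
seg00001.ts
#EXT-X-KEY:METHOD=AES-128,URI="https://example.com/keys/key2.bin"
#EXTINF:10,
seg00002.ts
```

Here the key rotates after two segments; rotating more often raises the overhead of key fetches, which is the performance trade-off noted above.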

Device and OS Compatibility

All iOS devices running iOS 3.0 and later support sharedin.


Android support:

  • Android 4.0 (Ice Cream Sandwich)
  • Android 4.1+ (Jelly Bean)
  • Android 4.4+ (KitKat)

Most new top-of-the-line Android devices now support the H.264 High Profile level.

Desktop sharedin Support

Safari has native support on the desktop; other desktop browsers typically rely on plug-ins or JavaScript players instead.

OTT- Over-the-top Video Devices

Most current OTT devices support sharedin tech. OTT devices prefer transmitting data over HTTP, which makes the two technologies a great fit, as sharedin tech is also delivered via HTTP. Some of the top OTT devices with support for sharedin tech are as follows:

  • Apple TV
  • Roku 3
  • D-Link MovieNite Plus
  • Boxee Cloud DVR


sharedin tech is made for Internet radio, delivering good audio quality without cutting out or buffering at any bandwidth, from traditional GSM mobile bandwidth to broadband. The tech has a lot of room for development and enhancement; the development of sharedin tech lies only in the stream segmenter and the client player, and none of it would succeed without HLS technology. sharedin adds a new stone to the pyramid of HLS.

Presented by

Dr. Ibrahim Noshokaty

April 2015
