Tuesday, August 14, 2012


What do jumper settings do?
 A jumper is used to make a short circuit between two or more pins.
One real-world use: primary and secondary hard disks are configured via jumper settings.


How is CPU usage calculated?
1. Assume the CPU speed is 1 KHz; then it executes 1000 cycles/second.

  If we run a program which requires 500 cycles/second, then CPU usage is 50%. The remaining 50% can be used for other
processes or programs.

  If the program is more intensive and takes 1000 cycles/second to execute, then no other process can run at that time, since the CPU is fully busy executing the program.
It will respond slowly to any external event/program.

 Normally CPU speed is measured in GHz nowadays.

1KHz = 1000 Hz
1MHz = 1000 KHz
1GHz = 1000 MHz

Sunday, August 12, 2012

Indian companies in jeans/shirts/pants

Indian companies in jeans:

1.Flying machine
2.Shoppers Stop
3.Mufti - available in Shoppers Stop, Central, Westside, Globus; range from Rs 1000
4.New Port -Arvind mills india
5.Excalibur- Arvind mills india
6.integriti jeans
7.Spykar jeans
8.Numero uno
9.K-Lounge
10.lawman pg3
11.Bare denim -Pantaloon group
12.Killer jeans - kewal clothing ltd
13.Trigger jeans
14. DJ & C jeans [from Big Bazaar]

Indian companies in Shirts & Pants:
1.Blackberry's
2.Indian terrain[From chennai]
3.Raymond
4.Zodiac
5.Grasim
6.Gwalior
7.Color Plus [is from Raymond]
8.Pantaloon[their own brand clothes mostly]
9.Shoppers Stop [their own brand clothes mostly]
10.globus


If you know any Indian company in the shirts/jeans business, please reply
& add it too.

Thursday, August 02, 2012


Why is the YUV format used in MPEG2/MPEG4/H264 encoders for compression?

  MPEG2/MPEG4/H264 are lossy compression standards.
In RGB, every byte carries both image detail and color information.
When we apply lossy compression, there is a chance of losing bit information.
If LSB bits are affected, there won't be a big change in the image. If MSB bits of RGB are lost,
there will be a bigger change in the image and the user can perceive/observe the changes.

  In the YUV format, the image detail is in the Y data [which holds the monochrome/black-and-white image]. The U and V components store the color information.

  Eyes can't perceive changes in color information as easily as changes in image information [the Y data, which holds the black/white image]. So in lossy compression, encoders won't disturb the image information much & will
compress more aggressively in the color information [UV components].

  If we use RGB, we cannot separate image information from color information, because every pixel carries both.

This is the reason why the YUV format is used in lossy compression, i.e., in MPEG2/MPEG4/H264 encoders.


   

Friday, July 27, 2012

Dalvik Debug Monitoring System:



DDMS is integrated into Eclipse and is also shipped in the tools/ directory of the SDK. DDMS works with both the emulator and a connected device. If both are connected and running simultaneously, DDMS defaults to the emulator.
  • From Eclipse: Click Window > Open Perspective > Other... > DDMS.
  • From the command line: Type ddms (or ./ddms on Mac/Linux) from the tools/ directory

    How DDMS Interacts with a Debugger

    When DDMS starts, it connects to adb. When a device is connected, a VM monitoring service is created between adb and DDMS, which notifies DDMS when a VM on the device is started or terminated. Once a VM is running, DDMS retrieves the VM's process ID (pid) via adb and opens a connection to the VM's debugger through the adb daemon (adbd) on the device. DDMS can now talk to the VM using a custom wire protocol.

    DDMS features summary / when will we use DDMS?
    We can use DDMS for the following purposes:
    1.Viewing heap usage for a process
    2.Tracking memory allocation of objects
    3.Working with emulator or device's file system 
        we can copy video files/files to emulator or device with DDMS
    4.using logcat : logs can be viewed and filtered based on LOG_TAGS
    5.monitoring network traffic information






Changes/Features in Android Ice Cream Sandwich:

1.RTSP streaming was integrated with NuPlayer
2.Valgrind dynamic memory-checking tool (which can detect memory leaks) is ported to Android
3.gdb is ported to android
4.OpenCV image processing library is ported to Android, which is why face detection/recognition is possible in
ICS. The Samsung Galaxy S III on ICS has face detection functionality
5.OpenAL application layer support  [refer system/media/wilhelm folder path]
6.Subtitle support through TimedTextPlayer for video/audio playback
7.ION memory manager support in ICS
[ ION is a generalized memory manager that Google introduced in the Android 4.0 ICS (Ice Cream Sandwich) release to address the issue of fragmented memory management interfaces across different Android devices. There are at least three, probably more, PMEM-like interfaces. On Android devices using NVIDIA Tegra, there is "NVMAP"; on Android devices using TI OMAP, there is "CMEM"; and on Android devices using Qualcomm MSM, there is "PMEM" . All three SoC vendors are in the process of switching to ION].
More about ION memory can be found : http://lwn.net/Articles/480055/
8.Framework for DRM
9.NuPlayer was introduced for HTTP live streaming on Honeycomb tablets. In ICS, the RTSP stack/streaming was removed from stagefright and integrated into NuPlayer
10. There were two ways decoders were integrated in Gingerbread:
        i) Software decoder path [Google decoders are integrated by deriving from MediaSource; the InstantiateSoftwareDecoder() function is used within OMXCodec]
        ii) OMX path

  From ICS onwards, Google decoders are integrated via the OMX path and InstantiateSoftwareDecoder() is removed from stagefright.




How will you identify memory leaks in Android?
1.We can use the valgrind tool to find memory leaks, stack corruption, array bound overflows, etc.
2.The valgrind tool is ported from Ice Cream Sandwich onwards.
3.From DDMS, we can select the process and start profiling; we then perform the suspected memory-leak operation, and a file is created. We can use the Eclipse Memory Analyzer tool to identify the memory leaks from this file.
4.We can identify a memory leak per process: the ps command lists the memory occupied by each process. The same can be viewed in the Settings option also.


      If we want to check whether video playback has a memory leak, we type the ps command to note the memory occupied by the mediaserver process, then run the video continuously 100 or 200 times. Afterwards we check the mediaserver memory again with ps. If there is a large increase in memory, we can confirm there is a memory leak in video playback.


What is missing in Android Stagefright/NuPlayer RTSP streaming ?

1.Jitter buffer handling
2.RTCP handling
3.Error correction and feedback through RTCP Sender report/receiver report

What are the buffering mechanisms available in OMX?
   1.AllocateBuffer - the component allocates the buffer itself
   2.UseBuffer - use the buffer given by the downstream component, e.g., using the video renderer's memory to fill the decoded data

When should the input or output buffer of the component be released?
        Once the OMX input buffer has been consumed, the component returns the EMPTY_BUFFER_DONE event to stagefright (or whoever owns the buffer). Once the IL client receives the EMPTY_BUFFER_DONE event, it can release the input buffer.
     The OMX IL client receives the FILL_BUFFER_DONE event from the OMX component once the output buffer has been filled. Once the IL client has consumed that output, it frees the output buffer.

How is a seek operation executed in Stagefright?
        When the application issues a seek, AwesomePlayer's seekTo() function is called through StagefrightPlayer.

How is the seek operation notified to the parser or extractor?
     Within AwesomePlayer, seekTo() sets the seek mode in a ReadOptions structure; the seek timestamp is also stored in this structure.

     While calling the decoder, AwesomePlayer invokes it as below:

            mVideoSource->read(&decodedBuffer, &readOptions);

This ReadOptions value is passed to the decoder, and the decoder holds a pointer to the MPEG4/MKV extractor's media source.
       The decoder invokes the read() fn of the extractor's audio/video source, passing the ReadOptions along.

Within the parser's media source read() fn, it checks whether the seek mode is set. If it is set, it
seeks to the seek timestamp through the parser.


                 

         

 

     

Wednesday, July 25, 2012

What is an OMX Content Pipe?

From the OMX IL 1.1 standard, we can integrate a source [e.g., a 3GP parser] or a streaming source/sink [e.g., an MP4 muxer].

Through a content pipe, it is possible for us to integrate source/sink components.

The Content Pipe specification explains how to open/read/write positions in storage or over the network.

Friday, May 18, 2012

STL & GNU C++ with codeblocks

If we are using an STL program in Code::Blocks, we have to enable the GNU C++ option
by selecting

Project -> Build options -> Have g++ follow the coming C++0x ISO C++
language standard [-std=c++0x]

Otherwise it won't compile in the Code::Blocks editor. While submitting the
same code on codeforces.com, we have to select the
GNU C++0x option for uploading the cpp file.

BHTML & BCSS codeforces problem log:

What are the steps I took to solve this problem?


For the problem, I started by generating strings and tried to
compare the strings as follows:

<a><b><b></b></b></a>

The generated strings [from the BHTML] are as follows:
a
a b
a b b

Pattern "a b" has 2 occurrences.

The pattern should be present in the generated string.

We need to check all the patterns and count/increment the current occurrence.

This approach takes too much time; I got "time limit exceeded".

I also failed on a substring test case.

For example, for <sundar/>
and the pattern "sun", it should return 0, but it was returning some other value.
So I stored the values in a hashtable and added the numbers to an array.
Instead of comparing strings, I started comparing
integer values.



1) Time limit exceeded.
2) Substring testcase failure: for <sundar/> and query "sun" the output was 1; it should be 0.
3) String comparison takes time, so I added the strings to a hashtable and
generated an int array to compare & reduce
time.
4) Integer comparison also took around 7 seconds for the problem &
failed in some cases, giving improper output.
5) I checked seulgi kim's code. Based on my understanding,
I created
N-ary trees from the BHTML. In the N-ary trees, we search for the
given query recursively.

6.Through debugging I found that PushToStack should happen for
<tag/> cases too.
7.Modified the PreOrder() fn return type to long.
8.Modified readQuery() to read char by char.
9.Deleted the query array and created it anew for each query.
10.Tried changing the stack size limit:
#pragma comment(linker, "/STACK: 2000000")

11. I doubted readQuery(), since I was getting large input for the query.
I changed the maximum query character count [MaxQueryCount] from 250
to 4000. After I posted it, it worked fine & was accepted on
codeforces.
I had assumed a query line would have at most 250 characters; I hadn't
read the problem statement properly:

" Each query is a sequence x1, x2, ..., xn, where xi is the i-th
element of the query, and n (1 ≤ n ≤ 200) is the number of elements in
the query. The elements are separated by single spaces. Each query
doesn't begin with and doesn't end with a space. Each query element is
a sequence of lowercase Latin letters with length from 1 to 10."

200 [elements] * 10 [element size] + 199 [spaces] = 2199 characters at
most per query line, so the 4000-character limit I chose is actually
enough for every legal testcase.

12.Even though I successfully submitted the code on codeforces, I
found one more problem: the allocated nodes were not being released
properly. I released all the nodes, including the parent nodes, & resubmitted it.

Monday, May 14, 2012

Competition Sites to practice

1 UVA:http://acm.uva.es/p - The Valladolid University Online Judge.
Over N problems, for a reasonable value of N. The problems are culled
from old contests, and online contests.
2.ZJU:Zhejiang University Online Judge - http://acm.zju.edu.cn
3.SGU:Saratov State University Online Contester - http://acm.sgu.ru
4.PKU: Peking University Judge Online of ACM ICPC -
http://acm.pku.edu.cn/JudgeOnline
5.topcoder
6.codeforces

Wednesday, May 09, 2012

Books recommended in codechef

1. Standard book on Algorithms - Introduction to Algorithms by T H
Cormen, C E Leiserson, R L Rivest, C stein ( Famously known as CLRS )
2. Basic Algorithms - Algorithms by Richard Johnsonbaugh, Marcus Schaefer
3. Game Theory - Winning ways for your mathematical plays by Elwyn R.
Berlekamp, John H. Conway, Richard K. Guy
4. Programming Challenges - Steven Skiena
5. Concrete Mathematics - Knuth
6. How to Solve It by Computer - Dromey
7. Structure and Interpretation of Computer Programs - Abelson & Sussman
8. Programming Language Pragmatics - Michael Scott: it gives a
comparative study of various programming languages and helps you
choose the appropriate one for the task at hand

How P frame seek support will be added in any multimedia framework

Some multimedia frameworks have support for P-frame seek; some
frameworks don't. So how can we add this support?

For P-frame seek, we cannot directly start decoding & rendering
from the P frame.
Since a P frame doesn't have the entire frame information, if we start
decoding & rendering from the P frame, it shows
green patches on screen and hurts the user experience.


How is P-frame seek supported in the DirectShow multimedia framework?


For any seek timestamp, the parser seeks to the I-frame timestamp
before the seek timestamp.
Ex:
An I frame is available at the 10th second.
If the seek is done to the 12th second, the parser will fetch the I
frame from the 10th second and give it to the decoder.

The source filter sets IMediaSample's SetPreroll() to true. The
decoder decodes the frame, gets the necessary information, and won't
give it to the renderer.

Once the timestamp reaches the seek timestamp, the parser sets
SetPreroll to false, and frames start rendering.


OpenCORE:

OpenCORE's frames have an option to set a DoNotRender flag. If we set
this flag, the framework won't render the given frame.

In general, in any multimedia framework:

If we seek to a timestamp, then until the seek timestamp is reached,
video and audio frames are dropped without being rendered to the
hardware device. In the case of audio, frames are independently
decodable [effectively every frame is an I frame], so audio could
start playing immediately while video still has to catch up. The user
would observe this weird behaviour, since the ear is more sensitive
than the eye. Rendering the audio early also advances the audio clock,
which causes AV sync issues once video starts rendering.

In any multimedia framework, we have to do the following steps to
do a P-frame seek:



1. parserSeekTo(nearest I frame before seekTimestamp);
2. frame = parserRead();
3. decodedFrame = decode(frame);

4. if (decodedFrame.timestamp < seekTimestamp)
{
release(decodedFrame);   /* decode but do not render */
}
else
render(decodedFrame);

Steps 2-4 repeat for every frame until the seek timestamp is reached.

Weird video behaviour in Stagefright

In stagefright, if any weird behaviour is observed in video, the following
steps need to be taken:

1.Print the time taken by the parser and decoder [to check for any
time-consuming processing involved].
2.Check whether any frame is being dropped by enabling the log ["dropping
late frame" in AwesomePlayer.cpp].
3.Print logs in all sleep() calls in AwesomePlayer [while rendering
frames, threads might go to the background and do nothing because of
the sleep() call].
4.Check for any postVideoEvent(10000ms); this might delay the rendering
of frames. For optimization or any other reason, chipset vendors might
tune this
value.

String comparison

Whenever we need to compare strings, what can we use?

Text: abc bcd bcd abc
Pattern: abc bcd
[Ex: BHTML & BCSS codeforces problem]

We need to compare text and pattern using strcmp, but naive string
comparison takes more time.

Solution 1) We can use the Rabin-Karp or KMP algorithm to reduce the
time, or else we can avoid the problem in the following way.


How to avoid this:

We can use an array to store the strings without duplication.

Array[][] = { "abc", "bcd" }

Text Array: {0,1,1,0}
Pattern Array: {0,1}

Then apply naive comparison on the text array and the pattern array.

Solution 2) Compared to string arrays, int array comparison takes less time.

Solution 3) Even with an array, in the worst case we need to compare
against all the elements in the array to add or look up a string.
What can we do to improve this?

We can use a binary search tree to store the strings. If the given
string is less than the root string,
we search the left subtree; if it is greater
than or equal to the root string, we search the
right subtree.

The STL map uses a red-black tree implementation, so we can make
use of map to store the strings. For a hash table, std::unordered_map
can be used instead.

Thursday, May 03, 2012

DecodeButDoNotRender

Insights on how to implement a DecodeButDoNotRender flag in any
multimedia framework.
This is needed for seeking when there is no I frame close to the
target timestamp.

Wednesday, April 25, 2012

What is structure padding?

On a 32-bit system, suppose we declare:

struct test
{
char a;
int b;
};

It will occupy 8 bytes of memory on a 32-bit system. For the char, the
compiler effectively sets aside 4 bytes: 1 byte is used by the char itself
and the remaining 3 bytes are padding, filled with junk values. This is
called structure padding.
The reason is data alignment. Data alignment means putting the data
at a memory offset equal to some multiple of the word size,
which increases the system's performance due to the way the CPU handles memory.

For example, on a 32-bit system, the data to be read should be at a
memory offset which is some multiple of 4.
If the data starts at the 18th byte, the computer has to read two 4-byte
chunks to get at it.
If the data starts at a multiple of 4 [4, 8, 12, 16, 20], the computer can
read a single 4-byte chunk.
If the read spans two virtual memory pages, it will take even more
time than expected.

In this way, alignment (and the padding that preserves it) improves performance.


If the data members are declared in descending order of size, the
wastage of bytes is minimal.

struct Test
{
char a;
int b;
char c;
};

sizeof(Test) is 12 bytes.

But if we declared like this, 8 bytes will be allocated for structure "Test".


struct Test
{

int b;
char c;
char a;
};

How to avoid structure padding?
If we don't want to waste memory and can trade off some performance,
we can use #pragma pack to control the alignment.

Tuesday, April 24, 2012

How is the OMX core library loaded for stagefright by a chipset vendor like Qualcomm/NVIDIA?

Qualcomm and NVIDIA processors have built-in hardware codec support.
They provide the OMX components too. Their OMX core is
loaded from libstagefrighthw.so .
This libstagefrighthw.so library is loaded by stagefright in
OMXMaster.cpp via

addPlugin("libstagefrighthw.so");