Tuesday, November 18, 2008

Single input media sample and multiple output media samples delivered from a Transform Filter

Single input media sample and multiple output media samples in a Transform Filter:
===============================================================================
Solution :
---------

Implement the Receive() fn as follows:


// m_pOutput is the filter's output pin; m_nOutputSamplesLeft is a member that
// Transform() updates with the number of output samples still to be produced
// from the current input sample.
HRESULT TransformFilter::Receive(IMediaSample* pMediaSample)
{
    HRESULT hr = S_OK;

    do
    {
        IMediaSample* pOutputSample = NULL;

        hr = m_pOutput->GetDeliveryBuffer(&pOutputSample, NULL, NULL, 0);
        if (FAILED(hr)) return hr;

        hr = Transform(pMediaSample, pOutputSample);   // also updates m_nOutputSamplesLeft
        if (FAILED(hr)) { pOutputSample->Release(); return hr; }

        hr = m_pOutput->Deliver(pOutputSample);
        pOutputSample->Release();

    } while (m_nOutputSamplesLeft != 0);

    return hr;
}

Error while building PC based Dshow baseclasses:

Error while building PC based Dshow baseclasses:
==================================================

1>D:\Program Files\Microsoft Visual Studio 8\VC\PlatformSDK\include\winnt.h(222) : error C2146: syntax error : missing ';' before identifier 'PVOID64'
1>D:\Program Files\Microsoft Visual Studio 8\VC\PlatformSDK\include\winnt.h(222) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>D:\Program Files\Microsoft Visual Studio 8\VC\PlatformSDK\include\winnt.h(5940) : error C2146: syntax error : missing ';' before identifier 'Buffer'
1>D:\Program Files\Microsoft Visual Studio 8\VC\PlatformSDK\include\winnt.h(5940) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
1>D:\Program Files\Microsoft Visual Studio 8\VC\PlatformSDK\include\winnt.h(5940) : error C4430: missing type specifier - int assumed. Note: C++ does not support default-int

Solution:
-----------
Define the following macro in Winnt.h before these lines:

typedef void *PVOID;
typedef void *POINTER_64 PVOID64;


After code change:
#define POINTER_64 __ptr64
typedef void *PVOID;
typedef void *POINTER_64 PVOID64;

Another Solution:
====================
Update BaseTsd.h with the latest Platform SDK version of the file.

Wednesday, November 05, 2008

RenderStream Failure

RenderStream Failure :
========================
hr = CaptureGraphBuilder2->RenderStream(
NULL,NULL,mp4Source,mpeg4videoDecoder,Mpeg4Encoder);

returns the "No combination of intermediate filters found" error.

Observation:
If we connect the pins through RenderStream() using an MPEG-1 video decoder
instead, as follows:

hr = CaptureGraphBuilder2->RenderStream(
NULL,NULL,SourceFilter,mpeg1VideoDecoder,Mpeg4Encoder);

(SourceFilter might be one added with AddSourceFilter() and a URL),

RenderStream() runs successfully.


Solution:
============


hr = CaptureGraphBuilder2->RenderStream(
NULL,&MEDIATYPE_Video,mp4Source,mpeg4videoDecoder,Mpeg4Encoder);

If we modify the call as above (passing &MEDIATYPE_Video), it works fine.

The reason: when the media type passed to RenderStream() is NULL, it simply
connects the source filter's first output pin to the video decoder filter.

In the MPEG-1 source filter the first output pin is the video pin, so the
connection succeeds without any problem.

In the failing MP4 source filter case, the first output pin is the audio pin,
so RenderStream() tries to connect the MP4 source's audio output pin to the
video decoder, and that causes the error.

If we specify the media type as video, RenderStream() picks the source
filter's video output pin and connects it to the video decoder, so it works
fine.

Friday, October 17, 2008

Problem: Source Filter's Pause() fn was not called and directly Stop() fn was called


Problem:
----------------
  The source filter's Pause() fn was not called; Stop() was called directly.
 
 
Solution steps:
------------------------
         For this problem I tried two approaches:
 
          1. I checked that the source filter output pin's GetMediaType() and CheckMediaType() succeeded and that the decoder filter's CheckInputType() and CheckTransform() succeeded, so there is no problem with the pin connection.
          2. I developed a player application and rendered only the video output pin, and still got the same error.
 
So neither the pin connection nor the audio pin was the cause.
            
 
 
Solution:
------------------
 
Importance of biPlanes in BITMAPINFOHEADER:
-------------------------------------------------------------------------

                In the source filter, GetMediaType() returns the media type.
This media type contains a BITMAPINFOHEADER, and the BITMAPINFOHEADER has a biPlanes field.
If biPlanes is set to zero, the source filter goes directly to the filter's Stop() fn.
If I set it to 1, the video plays fine.
The MSDN DirectShow docs prescribe that this field must always be set to 1.
 
            So it is better to set biPlanes to 1 in the media type returned from GetMediaType() in our filter.
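A minimal sketch of what that looks like in an output pin's GetMediaType(); this is illustrative only (the class name, resolution and RGB format are assumptions, not the filter's actual code), and it needs streams.h from the DirectShow base classes:

HRESULT CMyOutputPin::GetMediaType(CMediaType *pMediaType)
{
    // Allocate and zero a VIDEOINFOHEADER format block.
    VIDEOINFOHEADER *pvi = (VIDEOINFOHEADER *)
        pMediaType->AllocFormatBuffer(sizeof(VIDEOINFOHEADER));
    if (pvi == NULL)
        return E_OUTOFMEMORY;
    ZeroMemory(pvi, sizeof(VIDEOINFOHEADER));

    pvi->bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    pvi->bmiHeader.biWidth       = 320;                  // example resolution
    pvi->bmiHeader.biHeight      = 240;
    pvi->bmiHeader.biPlanes      = 1;                    // must always be 1
    pvi->bmiHeader.biBitCount    = 24;
    pvi->bmiHeader.biCompression = BI_RGB;
    pvi->bmiHeader.biSizeImage   = GetBitmapSize(&pvi->bmiHeader);

    pMediaType->SetType(&MEDIATYPE_Video);
    pMediaType->SetSubtype(&MEDIASUBTYPE_RGB24);
    pMediaType->SetFormatType(&FORMAT_VideoInfo);
    pMediaType->SetTemporalCompression(FALSE);
    pMediaType->SetSampleSize(pvi->bmiHeader.biSizeImage);
    return S_OK;
}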
 
 
 
 
 
 
 

how to find framerate of 3GP file?

 
 
How to find the frame rate of a 3GP file?
Is there any particular box in a 3GP file that specifies the frame rate?
 
Solution:
--------------
      
3GP is actually a QuickTime-based file format. There is no dedicated field for frames per second, so you have to
calculate it by dividing the number of frames in the stream by the stream's
duration.
 

mDurationInSecs = pMediaTrkInfo->trackDuration / lMovieInfo.timescale;

fFrameRate = pMediaTrkInfo->totalNumOfFrames / (*apVidInfoLst)[index].mDurationInSecs;

// Frame rate calculation: if 10 frames are played in 2 secs, then FPS is 5 (10/2)

AvgTimePerFrame calculation

 
// Frame rate calculation
The constants come from the header file reftime.h:
-----------------------------
const LONGLONG MILLISECONDS = (1000);            // 10 ^ 3
const LONGLONG NANOSECONDS = (1000000000);       // 10 ^ 9
const LONGLONG UNITS = (NANOSECONDS / 100);      // 10 ^ 7
//const REFERENCE_TIME FPS_25  = UNITS / 25;
//const REFERENCE_TIME FPS_30  = UNITS / 30;
//AvgTimePerFrame  = UNITS/frameRate;
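Putting the two pieces together, a small hedged sketch; the input values are the ones described above (total frame count, track duration in timescale ticks, timescale in ticks per second):

// Returns AvgTimePerFrame in 100-ns units.
LONGLONG CalcAvgTimePerFrame(LONGLONG totalFrames, LONGLONG trackDuration, LONGLONG timescale)
{
    const LONGLONG UNITS = 10000000;                               // 100-ns units per second (reftime.h)
    double durationInSecs = (double)trackDuration / (double)timescale;
    double frameRate      = (double)totalFrames / durationInSecs;  // frames / seconds
    return (LONGLONG)(UNITS / frameRate);
}

For example, 10 frames over 2 seconds gives 5 fps and an AvgTimePerFrame of 2,000,000 (0.2 s).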

 

VS 2005 Dshow Error PVOID64

 Error C2146: syntax error: missing ';' before identifier PVOID64

VS 2005 error with Dshow baseclasses.

C:\Program Files\Microsoft Visual Studio 8\VC\PlatformSDK\include\winnt.h(222) : error C2146: syntax error : missing ';' before identifier 'PVOID64'


Solution:
--------------

The DirectX Include directory contains an early version of BaseTsd.h which does not include the definition of POINTER_64. You should instead use the version of BaseTsd.h from the Platform SDK, either the one that ships with Visual Studio 2005 (C:\Program Files\Microsoft Visual Studio 8\VC\PlatformSDK\Include\BaseTsd.h) or the one in an updated Microsoft SDK installation. To get the compiler to use the right file, just remove BaseTsd.h from the DirectX Include directory.


 

Error

 
 
Error :
---------
AKY=00080001 PC=03f6dc24(coredll.dll+0x00021c24) RA=29b314b8(tcpip6.dll+0x000414b8)
BVA=390614b8 FSR=000000f5
and explain the scenario

Solution:
------------
 1. This is a memory error: somewhere we are using memory beyond its boundary.
 For example, I allocated 30 bytes of memory and tried to copy 40 bytes into it with
memcpy(); that causes the crash.
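A minimal illustration of that pattern (the buffer sizes are the ones from the description above; the corruption usually shows up later, as in the Data Abort log):

#include <stdlib.h>
#include <string.h>

int main(void)
{
    char src[40] = {0};
    char *pBuf = (char *)malloc(30);   /* 30 bytes allocated                    */
    memcpy(pBuf, src, 40);             /* 40 bytes copied: heap overflow,       */
                                       /* corrupts memory and crashes later     */
    free(pBuf);
    return 0;
}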
 

 

Difference between Normal file and PD streamable clip

Difference between Normal file and PD streamable clip:
----------------------------------------------------------------------------------
 
                   A normal video file can have the file index information at the end (the video track, audio track, durations and all other information are available at the end; if so, we can play the file only after downloading the entire content of the file).
                 A Progressive Download (PD) streamable clip has the index information at the beginning. In a 3GP file, the moov atom holds the file index positions and information.
              If the moov atom is at the end of the file, then the file is not streamable using PD.
     An RTSP streamable 3GP file also has hint tracks at the beginning to give the file index positions.
                

Assumptions:
------------------

1. QuickTime, MP4 and 3GP files having a hint track can be transferred through the HTTP or FTP protocols.
 
2. The moov atom index must be at the beginning of the file, so that the player can play the file before downloading the entire content.

3. A self-contained movie can be streamable.

 

Solutions:
-----------
  Any QuickTime, MP4 or 3GP file can be streamed with Progressive Download
if the moov atom comes first: the moov index must be at the beginning of the file so that the player can start playback before the entire content is downloaded.

 


 For Example


3gp file atoms are like this:
------------------------------
 ftyp
 moov
  mdat

 we can stream it in PD streaming.

if the 3gp atoms are as follows:

 ftyp
 mdat
 moov


 This indicates the moov atom is at the end of the file, so it can't be streamed with Progressive Download.

 

PD Stack:
-----------
 Within the PD stack, we wait for the file size and for information about the video and audio tracks.

 If it does not receive the video/audio track info and the file information, it will not proceed to download the content.

 


  

Without calling Pause () fn we are not able to set seek position...

 


Without calling  Pause () fn we are not able to set seek position...
--------------------------------------------------------------------------

We should be able to call SetPosition (set the seek position) on the fly while the video is running.

    In our source filter, however, we are not able to set the seek position that way. Before setting the seek position, we have to do the

following:

        Pause() of IMediaControl;
 SetPosition();
 Run()

Only then are we able to set the seek position on the source filter.
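A hedged application-side sketch of that sequence (error handling trimmed; pGraph is assumed to be an already-built IGraphBuilder):

IMediaControl *pControl = NULL;
IMediaSeeking *pSeeking = NULL;
pGraph->QueryInterface(IID_IMediaControl, (void **)&pControl);
pGraph->QueryInterface(IID_IMediaSeeking, (void **)&pSeeking);

REFERENCE_TIME rtStart = 5 * 10000000i64;        // seek to 5 seconds (100-ns units)

pControl->Pause();                               // 1. pause the graph
pSeeking->SetPositions(&rtStart, AM_SEEKING_AbsolutePositioning,
                       NULL, AM_SEEKING_NoPositioning);   // 2. set the seek position
pControl->Run();                                 // 3. run again

pSeeking->Release();
pControl->Release();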

 

Solution:
--------------
       The reason we are not able to set the seek position while the video is running is that we are not handling
SetPosition properly.


  If the source filter's output pin thread exists (which means the video is running), then:

          i) Stop the source filter's output pins:

  if(m_paStreams[i]->ThreadExists())
   {
     if(m_paStreams[i]->IsConnected())
     {
    hr = m_paStreams[i]->DeliverBeginFlush() ;
     }
    
     if(m_State != State_Stopped )
     {
      hr = m_paStreams[i]->Stop();
     }
         
     if(m_paStreams[i]->IsConnected())
     {
    hr = m_paStreams[i]->DeliverEndFlush() ;
     }  
   }
 

         ii) Set the seek position.

 iii) Call Pause() on each output pin of the source filter.


 
If the filter graph is stopped, the video renderer does not update the image after a seek operation. To the user, it will appear as if the seek did not happen. To update the image, pause the graph after the seek operation. Pausing the graph cues a new video frame for the video renderer. You can use the IMediaControl::StopWhenReady method, which pauses the graph and then stops it.
 

When does the source filter output pin connect to the decoder?

 
Source Filter:
-------------------------
For the media type, check the following:
1. Only when GetMediaType() and CheckMediaType() succeed will the
source filter's output pin connect to a decoder filter.

H.264 Nonstandard clip Crash issue in Quicktime 7.0.3

H.264 Nonstandard clip Crash issue in Quicktime 7.0.3 :
=========================================================

 i) Normally the width and height of H.264 video are multiples of 16.
 ii) A non-standard clip is one whose dimensions are not multiples of 16, like 450x360.

 iii) In the QuickTime player, the QuickTime file parser reports the original width and height
  (450x360), and the decoder allocated its output buffer with a width and height of

450x360. This leads to a crash. The decoder must allocate its output buffer as
464x368 (width and height aligned up to multiples of 16).

 In H.264, we may encode a video frame as 100x100 while the coded width and height

are 112x112; the actual displayed video width and height are still 100x100.

 iv) If the decoder allocates only a 450x360 output buffer, the crash might happen

in the decoder or in the renderer which renders the data.

 QuickTime may have solved this issue in the latest version, QuickTime 7.5. In older

versions, they may allocate the output width and height as 450x360 and crash.
 Our decoder behaved the same way.
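A minimal sketch of the alignment rule described above: round the dimensions up to the next multiple of 16 before allocating the decoder's output buffer.

static inline int AlignTo16(int n)
{
    return (n + 15) & ~15;   // round up to the next multiple of 16
}

// Example: a 450x360 clip needs a 464x368 output buffer,
// while the visible picture stays 450x360.
int bufWidth  = AlignTo16(450);   // 464
int bufHeight = AlignTo16(360);   // 368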

 

 


 

 

RGB565 video renderer performance and how can we tackle it ?

RGB565 video renderer performance and how can we tackle it ?

Reason:
=========
 In RGB565 scaling and rotation, retrieving the R, G and B components requires a
calculation for each and every pixel, which hurts performance.
RGB565 scaling and rotation therefore take a lot of time, whereas with YV12 we are able to do scaling and rotation at twice the speed of RGB565.
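To illustrate the per-pixel cost, a small sketch of unpacking one RGB565 pixel; during scaling or rotation this shift-and-mask work is repeated for every pixel:

void UnpackRGB565(unsigned short px, unsigned char &r, unsigned char &g, unsigned char &b)
{
    r = (px >> 11) & 0x1F;   // 5 bits of red
    g = (px >> 5)  & 0x3F;   // 6 bits of green
    b =  px        & 0x1F;   // 5 bits of blue
}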
 
Solution:
============
        If we need better performance for RGB565 scaling and rotation, do the following:
             i)   Convert RGB565 to YV12
             ii)  Do the YV12 scaling or rotation
             iii) Convert YV12 back to RGB565
This improves the performance.
 
 
 

Why WMP calls the IFileSourceFilter's Load fn Twice

Why WMP calls the IFileSourceFilter's Load fn Twice:
-----------------------------------------------------------

If we implement IFileSourceFilter in our filter, WMP calls IFileSourceFilter::Load() twice.

Reason:
----------
 WMP calls Load the first time just to check whether the filter DLL is proper.
 WMP then calls Load again; only on this second call should we create the filter's pins.
In the case of a source filter, we create the pins only on the second call.
      

WMP player is not loading the Filter DLL in Wifi

WMP player is not loading the Filter DLL in Wifi :
-------------------------------------------------------------------------

1. We enabled Wi-Fi on a mobile device and, through Wi-Fi, tried to reach the streaming server.

 We gave the RTSP/PD server address to WMP;
if we specify http or rtsp in WMP, WMP should automatically load the
filter.
         But that does not happen, and our filter is not loaded.


Reason:
----------
 WMP player will not load the filter DLL before checking the host.

WMP Behavior:
-------------
 WMP sends some form of HTTP GET or similar request to the server.

 For example, for rtsp://10.203.92.78:8080/1.3gp,

 WMP sends a network request to the 10.203.92.78 server; only if the server acknowledges WMP does it try to load the filter DLL for the corresponding format.

 In the case of Wi-Fi, it may send the HTTP GET command, but the server may not respond to the request, or some form of communication gap happens, so WMP does not get the response.


  In the case of GPRS streaming, WMP loads the filter DLL and we are able to stream without problems.
 
Another possible reason is that WMP may not be able to detect the connected network intelligently.


Solution:
==========
      Develop our own player application to insert the streaming source filter and render it.
 


Note:
-------
  If our player application uses RenderFile(), it might hit the same problem.


 

 

WinCE macro and Log Location

WinCE macro:
----------------------
 
#ifndef WINCE
 fp = fopen("C:\\Test2ByteAligned.Dump", "wb");
#else
 fp = fopen("\\My Documents\\Test2ByteAligned.Dump", "wb");
#endif

RTSP Audio video Sync


Problem :
------------

 Issues with RTSP/live-stream audio and video sync.

 RTSP video-on-demand clips play fine; only with live streams does the audio/video sync fail. The streaming server uses RTSP as the metadata channel (just for the DESCRIBE, PLAY, PAUSE and TEARDOWN commands) and sends the video and audio data over RTP.

Analysis:
----------

 If there is an audio/video sync issue with RTSP live streaming,
it is because of the streaming server.

 RTSP streaming servers are specialized servers; they have to do the audio and video sync. For RTSP streaming it is enough to render the audio and video using the RTP timestamps; there is no need to do audio/video sync with RTCP (if we do RTCP sync, that is better).

 But the RTCP sync timestamp calculation takes a lot of time because of the floating-point operations involved, so low-end devices such as mobiles do not use RTCP for sync; it is enough to render the video and audio from the RTP timestamps.


 
Solution:
----------
 Make the streaming server deliver the video and audio output already synchronized.
 
  


 

Tuesday, September 09, 2008

IE internet browsing problem in Windows Mobile

IE internet browsing problem in Windows Mobile:

---------------------------------------------------------------------

1. Even though I configured the proxy server and set the USB setting to Turbo Mode,

I am not able to open any web pages on the mobile.

Solution:

----------

I opened ActiveSync's Connection Settings on the PC and selected "Automatic" from the list of options in the combo box.

The computer is connected to:

Automatic

Work Network

Internet

It was "Work Network" previously. After I changed it to "Automatic" on the PC, the web pages open in IE on the mobile.

Monday, September 08, 2008

WinCE Deployment error Deployment and/or registration failed with error

WinCE Deployment error:

------------------------------------------

Deployment and/or registration failed with error: 0x8973190e. Error writing file '\Program Files\VideoRenderer.dll'. Error 0x80450001: (null)

========== Build: 0 succeeded, 0 failed, 3 up-to-date, 0 skipped ==========

========== Deploy: 2 succeeded, 1 failed, 0 skipped ==========

This is because I deployed the DLL to the Windows folder, but the mobile does not allow deploying there.

So I prepared a CAB (containing the DLL) and installed it on the mobile.

But that did not work either.

  The Bulverde device showed the error as occurring in the tmarshaller.exe file.

Reason:

----------------

Previously I got the same error when the codec DLL was missing for the codec filter;

if I deployed it then, I got the error.

 

In this case, I had opened a file

using fopen() in the constructor of the video renderer; that causes the problem.

If I open the file in SetMediaType() instead, I do not get any errors.

So beware of fopen() calls.

 

 

I had declared a public variable in CVideoRenderer and called fopen("\\My Documents\\test.log") in CVideoRenderer's constructor.

CVideoRenderer's constructor is called in the CreateInstance of the filter's IUnknown implementation.

 

This is the General problem to any COM DLL.

Tuesday, September 02, 2008

How to configure the VS 2005 wizard to create the .rel file:


How to configure the VS 2005 wizard to create the .rel file:
----------------------------------------------------------------------------------------
Projects-> Properties->Linker ->CommandLine

 /subsystem:windowsce,5.01 /machine:THUMB /savebaserelocations:"$(TargetDir)$(TargetName).rel"

Monday, September 01, 2008

The difference between SmartPhone and Pocket PC?



---------- Forwarded message ----------
From: sundar rajan <sundararajan.svks@gmail.com>
Date: Sep 1, 2008 4:19 PM
Subject: Re: Anyone knows the difference between SmartPhone and Pocket PC?
To: ootaboys@googlegroups.com



 Windows Mobile is a general term for the Windows OS used in handheld devices,

just as "Windows" is a general term covering different flavors like Windows 98, Windows XP,

Windows NT and so on.

Windows Mobile  operating system for handhelds:


 1.Pocket PC Edition
 2.Pocket PC Phone Edition, and
 3.Smartphone Edition

Pocket PC Edition:
--------------------
 Stand-alone PDAs, such as the Dell Axim X51 series and the HP iPaq rx1950, use Pocket PC Edition and come with the full Mobile Office suite, including Word Mobile, Excel Mobile, and PowerPoint Mobile. These handhelds typically have 240x320-pixel touch screens, can feature wireless connectivity, and are best for those who want a palm-size device to organize vital information with the option to work on the go and surf the Web.
 (It is like a handheld PC.)

Pocket PC Phone Edition:
------------------------
 Devices that run Pocket PC Phone Edition are similar to Pocket PC PDAs in shape and size (including the touch-screen functionality), with the full suite of office apps, but they add cellular-wireless capabilities so that you can make phone calls. These all-in-one mobiles are good for power users who need the full functionality of being able to work and stay connected on the road. An example of a Pocket PC phone is the Palm Treo 700w.

SmartPhone:
-------------
 Windows Mobile Smartphone Edition offers the biggest difference of the three versions. First, the mobiles are smaller, resembling cell phones, and they generally feature 320x240-pixel displays that aren't touch sensitive. Instead, you navigate via soft keys and a joystick. Also, while you get all the calendar and contact tools, you don't get the entire Office Mobile suite, just Outlook Mobile; third-party apps, such as Westtek's ClearVue Suite, are available so that you can view work documents. These types of smart phones are perfect for users who want the phone form factor but also want to stay up-to-date and be more productive on the road.

 

Major differences between Smartphone and Pocket PC:

 A SmartPhone is a low-cost solution compared to a Pocket PC.

 A touch screen is available on a Pocket PC; a Smartphone has no touch screen, so we have to navigate via soft keys or a joystick.

 That is it...

 
 

 

 

Thursday, August 28, 2008

fatal error LNK1181: cannot open input file

Linker Error
--------------
MS_LINK : fatal error LNK1181: cannot open input file '..\..\..\..\..\3gpstack\pd_streaming_lib\prj\lib_ppc2003_evc4\windows mobile 5.0 smartphone sdk (armv4i)\debug\etm_pd_stack_lib_ppc2003_evc4.lib'

Solution:
----------
The path of the library file "etm_pd_stack_lib_ppc2003_evc4.lib" is too long.
It is under 255 characters, but it is still the problem: if I copy my project directly to D:\, I do not get any errors.
MSDN KB for the LNK1181 error:
--------------------------------------
SYMPTOMS
When you build a Microsoft Visual C++ .NET project in Microsoft Visual Studio .NET or a Microsoft Visual C++ 2005 project in Microsoft Visual Studio 2005, you may receive an error message that is similar to the following error message:
fatal error LNK1181: cannot open input file 'f:\temp\test.obj'
CAUSE
This problem may occur when the path of the Intermediate file folder or the path of the Output file folder in the Visual C++ project starts with a leading backward slash (\). The IDE uses the drive from where it is launched. The IDE does not use the drive that the project uses.
WORKAROUND
To work around this problem, include the drive name for the Intermediate file folder and for the Output file folder. To edit the entries for the IntDir folder or for the OutDir folder, follow these steps:
1. On the Project menu, click Property Pages.
2. In the Property Pages dialog box, click Configuration Properties.
3. In the General list, type new entries for the Output Directory items and for the Intermediate Directory items that include the drive names.

reference: http://support.microsoft.com/kb/839286

Friday, August 22, 2008

How to connect the Windows Pocket PC(Not having SIM) to the Internet thru ActiveSync

How to connect the Windows Pocket PC(Not having SIM) to the Internet thru ActiveSync:
------------------------------------------------------------------------------------
Two ways to connect our Development mobile to internet;
1.Thru the ActiveSync connected System
2.Thru the LAN settings

Development mobile will not have any SIM card in it;


Thru Active Sync Connected System:
-----------------------------------------------------------
To Select this mode we have to keep in mind the following things:
1.we must configure the Proxy server to the IE settings
2.USB Setting must be Turbo Mode;
How to Configure it:
----------------------------
Select Settings->Connections->connections->Tasks-> Set up Proxy server
general tab
Enter the name for the Settings: "Internet"
Proxy settings Tab:
-------------------------------
Enable the check boxes
1.this network connects to the internet option and
2.This network uses a proxy server to connect to the internet and enter the Proxy server address.
ProxyServer: 10.202.202.20
and click on OK.
Settings->Connections->connections has two tabs...
1.Tasks
2.Advanced
In the Advanced tab, click on "Select Networks" button.
The Network Management page will appear. It has two combo boxes.
Select the "Internet" (description given in General Settings ) in both the combo boxes.
and then click on OK.

Moreover we have to set the "USB setting" mode as Turbo Mode;
Select the following option button:
Settings->connections->USB setting-> ActiveSync(Turbo Mode)

Restart the device to make the new Active Sync connection.
Now check the Internet explorer with URL.

LAN Based internet connection:
------------------------------------------------
Thru LAN based connection, we can access the LAN ( Intranet websites also) sites.


To Select this mode we have to keep in mind the following things:
1.we must configure the Proxy server to the IE settings
2.The USB setting must be Normal Mode;
How to Configure it:
------------------------------
Select Settings->Connections->connections->Tasks-> Set up Proxy server or Edit my proxy server
general tab
Enter the name for the Settings: "Internet"
Proxy settings Tab:
-------------------------------------
Uncheck the following check boxes
1.this network connects to the internet option and
2.This network uses a proxy server to connect to the internet and enter the Proxy server address.
ProxyServer: 10.202.202.20
and click on OK.
Settings->Connections->connections has two tabs...
1.Tasks
2.Advanced
In the Advanced tab, click on "Select Networks" button.
The Network Management page will appear. It has two combo boxes.
Select the "Internet" (description given in General Settings ) in both the combo boxes.
and then click on OK.

Moreover we have to set the "USB setting" mode as Normal Mode;
Select the following option button:
Settings->connections->USB setting-> ActiveSync (Normal Mode)

Restart the device to make the new Active Sync connection.
Now check the Internet explorer with URL.

Wednesday, August 06, 2008

Is it possible to call the function available in .obj code?

Is it possible to call the function available in .obj code?
Solution:
===========
Yes.
But we can call it directly only from a C file; in C++ the compiler needs further information (C++ mangles the names).
I developed a DLL project.
Within a C file, we are able to call the function in the .obj code.
So within that DLL project, I added a wrapper for the following function,
int ScaleRGB(int Width,int Height); which is available only in object code.

Header.c:
--------
void ScaleRGB_Wrapper(int Width, int Height)
{
    ScaleRGB(Width, Height);
}
We have to add the .obj file to our DLL project;
within the C code, call the ScaleRGB() fn and export the ScaleRGB_Wrapper() fn
as follows:
TESTDLL_API void ScaleRGB_Wrapper(int Width, int Height)
{
    ScaleRGB(Width, Height);
}
From the DLL's C++ client application, call the exported ScaleRGB_Wrapper() function.
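For completeness, a hedged sketch of the C++ client side; the __declspec(dllimport) declaration and the argument values are illustrative, and TESTDLL_API/ScaleRGB_Wrapper are carried over from the post:

// Declare the exported wrapper with C linkage so the C++ compiler
// does not mangle its name.
extern "C" __declspec(dllimport) void ScaleRGB_Wrapper(int Width, int Height);

int main()
{
    ScaleRGB_Wrapper(320, 240);   // forwards to ScaleRGB() inside the .obj
    return 0;
}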

Monday, August 04, 2008

Debug Mode and Release mode differences

Debug Mode and Release mode differences:
------------------------------------------------
Always test the application in both debug and release mode as a habit;

In debug mode, many of the compiler optimizations are turned off, which allows the generated executable to match up with the code. This allows breakpoints to be set accurately and allows a programmer to step through the code one line at a time. Debugging information is also generated to help the debugger figure out where it is in the source code.
In release mode, most of the compiler's optimizations are turned on. Chunks of your code could be completely deleted, removed, or rewritten. The resulting executable will most likely not match up with your written code. However, release mode will normally run faster than debug mode due to the optimizations.

The biggest difference between these is that in a debug build the complete symbolic debug information is emitted to help while debugging applications, and code optimization is not applied. In a release build the symbolic debug info is not emitted and the code execution is optimized. Also, because the symbolic info is not emitted in a release build, the size of the final executable is smaller than that of a debug executable.
Note:
--------
One can expect to see funny errors in release builds due to compiler optimizations or differences in memory layout or initialization. These are usually referred to as "release-only" bugs. :) In terms of execution speed, a release executable will execute faster for sure, but this difference will not always be significant.
Q: Why does a program work in debug mode, but fail in release mode?

A: First of all, there is no such thing as 'debug mode' or 'release mode'. The VC++ IDE offers the possibility to define configurations which include a set of project settings (like compiler/linker options, output directories etc.). When a project is created using AppWizard, you get two default configurations: "Win32 Debug" and "Win32 Release". These are just convenient starter configurations with several preset options which are suitable for typical debug builds or release builds respectively, but you are by no means restricted to those settings. Actually, you can modify those configurations, delete them, or create new ones.

Now let's see what the two default configurations typically include and what distinguishes them:

Win32 Debug:
- Subdirectory 'Debug' used for temporary and output files
- Preprocessor symbol _DEBUG defined
- Debug version of the runtime libraries is used
- All compiler optimizations turned off
- Generate debug info

Win32 Release:
- Subdirectory 'Release' used for temporary and output files
- Preprocessor symbol NDEBUG defined
- Release version of the runtime libraries is used
- Various compiler optimizations turned on
- Generate no debug info

There are a few other differences, but these are the most important ones. Now, what's the first implication of all this? That, as opposed to a common misunderstanding, you can debug a release build. Just go to 'Project -> Settings', choose the Win32 Release configuration, tab 'C/C++', 'General' and set 'Debug Info' to 'Program Database'. Then go to the tab 'Linker', and turn on 'Generate Debug Info'. If you rebuild your project now, you will be able to run it in the debugger. Regardless of whether your program crashes or just doesn't behave as expected, running it in the debugger will show you why. Note however, that due to optimizations turned on in the release build, the instruction pointer will sometimes be off by a few code lines, or even skip lines altogether (as the optimizer didn't generate code for them). This shouldn't be a concern; if it is, turn off optimizations.

When debugging your release build this way, you will probably discover that at a certain point during execution, a variable has a different value in the release and in the debug build, causing the differing behaviour. And if you go back and see where the value of that variable is set, you will most probably find out that it isn't: you simply forgot to initialize that variable. The reason why the debug build seemed to work is that the debug version of the runtime library initializes dynamic memory and stack variables to known values (in order to track down memory allocation and overwrite errors), while the release version of the runtime library doesn't. This is by far the most frequent single cause for different behaviour between debug and release builds, so chances are good that this fixes your problem (and for the future, remember to always initialize your variables).

If uninitialized variables were not the cause of your problem, let's look at the next possible difference between debug and release builds: the preprocessor symbols _DEBUG and NDEBUG. If you have any code inside an #ifdef _DEBUG / #endif block, it will not be contained in a release build. What's worse, the dependency on those symbols can be hidden inside other macros. A typical candidate for this is ASSERT: it expands to the assertion-testing code if _DEBUG is defined, and to nothing if it is not.
Therefore, be careful to have no code with side effects inside an ASSERT macro. For example, the following code will work in a debug build, but fail in a release build:

CSomeDialog dlg;
ASSERT(dlg.Create(IDD_SOME_DLG));
dlg.ShowWindow(SW_SHOW);

As a rule, never put code which needs to be executed inside an ASSERT. (A side note: conditions which can be expected to fail at runtime, like the Create() call in the example, should never be tested with ASSERTs anyway. Assertions are a tool to assert pre- and postconditions regarding your code, not runtime error conditions.)

At this point, you have most probably found out why your code failed in the release build. If not, this might be one of the very rare cases where the compiler optimizations caused your code to behave differently (the VC++ compiler had several optimizer bugs in the past, and I doubt they have all been fixed). To exclude this, first turn all the optimizations off (Project -> Settings, tab 'C/C++', category 'Optimizations', option 'Disable (Debug)'). If your code works now, selectively turn optimization options on until you find the culprit. Simply leave it turned off, or upgrade to a newer version of the compiler (or install the most recent service packs) which might hopefully fix that bug.

This should help you get your release build running in most of the situations. For a more in-depth discussion about the differences between debug and release builds, see the excellent article "Surviving the Release Version":
http://www.codeproject.com/KB/debug/survivereleasever.aspx

Incorrectly prototyped message handlers! That's another big goof-up everyone makes at least once. In the debug build, incorrect message handler signatures don't cause any problems, but MFC does a couple of naughty type casts in the message map macros. So when you build the same code in release mode, you are guaranteed to run into trouble: as soon as your message handler returns, your stack frame gets trashed and you get a neat little GPF.
Some References:
http://www.codeproject.com/KB/debug/survivereleasever.aspx
http://www.codeproject.com/KB/debug/releasemode.aspx

Local variable value changed only in release mode not in debug mode

I faced a problem like the following:
Local variable values changed after a particular point; the expected value was not in the variable,
even though we were not changing it. I had faced this problem many times, but thought of it as a multi-threading error.
Nature of the error:
---------------
This error occurred only in release mode, not in debug mode,
and the function used many local variables.
Fix:
-------
I found that after calling a certain function the local variable's value changed:
void Test()
{
    //variables ...
    //....
    //...
    int i = 0;

    i = 2;
    InnerFn();
    printf("%d", i);   // i value is around 158004
                       // for no apparent reason
}
Solution:
--------------
It is not a multi-threading error; multi-threading errors mostly affect global variables and class members, not local variables (for every thread call, the stack variables are allocated and freed).
The problem is a memory error in InnerFn(): it writes beyond the memory it owns, so it corrupts the local variables of Test().
After fixing that memory problem in InnerFn(), the local variable's value no longer changes; it keeps the expected value.

How to replace Microsoft's video renderer with our own video renderer?

If we are developing our own renderer, we have to make the media player use our own
video renderer filter. What do we have to do? Look at the default Video Renderer's registration and check its merit:
[HKEY_CLASSES_ROOT\CLSID\{70e102b0-5556-11ce-97c0-00aa0055595a}]
@="Video Renderer"
"Merit"=dword:00800000

Register our own video renderer filter with a merit of
0x800000 + 1.
Then Windows Media Player will use our own video renderer filter.
DirectShow loads filters based on merit; if the merit is higher, the filter is preferred by Windows Media Player, and
by Intelligent Connect when connecting filters in a filter graph.
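A hedged sketch (following the usual DirectShow registration pattern, typically called from DllRegisterServer) of registering a filter with a merit one above the default renderer's; CLSID_MyVideoRenderer and the friendly name are placeholders, and the pin descriptions are omitted:

#include <dshow.h>

// Placeholder CLSID: substitute your filter's real CLSID here.
static const GUID CLSID_MyVideoRenderer =
    { 0x12345678, 0x1234, 0x1234, { 0x12, 0x34, 0x12, 0x34, 0x12, 0x34, 0x12, 0x34 } };

HRESULT RegisterWithHigherMerit()
{
    IFilterMapper2 *pFM2 = NULL;
    HRESULT hr = CoCreateInstance(CLSID_FilterMapper2, NULL, CLSCTX_INPROC_SERVER,
                                  IID_IFilterMapper2, (void **)&pFM2);
    if (FAILED(hr))
        return hr;

    REGFILTER2 rf2;
    rf2.dwVersion = 1;
    rf2.dwMerit   = 0x00800000 + 1;    // just above the default Video Renderer's merit
    rf2.cPins     = 0;                 // pin descriptions omitted in this sketch
    rf2.rgPins    = NULL;

    hr = pFM2->RegisterFilter(CLSID_MyVideoRenderer,
                              L"My Video Renderer",             // friendly name
                              NULL,                             // device moniker
                              &CLSID_LegacyAmFilterCategory,    // filter category
                              NULL,                             // instance data
                              &rf2);
    pFM2->Release();
    return hr;
}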

Tuesday, July 29, 2008

Application Crashing Problem in Release version in VC++ with Map File

Application Crashing Problem in Release version in VC++:
-------------------------------------------------------------------------


Crash is as follows:

Data Abort: Thread=8dac8ba4 Proc=8037c870 'TestApp.exe'
AKY=ffffffff PC=78a3e118(image_proc.dll+0x0001e118) RA=00013178(StreamingPlayer.exe+0x00003178) BVA=241dc35e FSR=00000807

Data Abort: Thread=8d89dc34 Proc=8037c870 'TestApp.exe'

AKY=ffffffff PC=78a3e118(image_proc.dll+0x0001e118) RA=00013178(StreamingPlayer.exe+0x00003178) BVA=241dc35e FSR=00000807

Unhandled exception at 0x78a3e118 in TestApp.exe: 0xC0000005: Access violation writing location 0x001dc35e.

Data Abort: Thread=8d739000 Proc=8037c960 'wmplayer.exe'

AKY=ffffffff PC=7894e118(imageLib.dll+0x0001e118) RA=00013178(wmplayer.exe+0x00003178) BVA=260ec35e FSR=00000807








If we build our application as a release version and it crashes at the client side with only an address in the log,
how can we trace which function failed and locate the error?


1. The solution is to make use of the .map file.


.map file content sample:
---------------------------
imageLib
Timestamp is 488496e7 (Mon Jul 21 19:32:15 2008)
Preferred load address is 10000000


Start Length Name Class
0001:00000000 000004bcH gTest CODE
0002:00000000 00000e30H Addition CODE

Address Publics by Value Rva+Base Lib:Object

0000:00000000 ___safe_se_handler_table 00000000
0000:00000000 ___safe_se_handler_count 00000000
0001:00000000 RGB565toYUV420 10001000 ColorConversion.obj
0002:00000000 UYVYtoRGB565 10002000 ColorConversion.obj
0002:000003d4 Rotate 100023d4 ImageOps.obj
0002:00000660 Blur 10002660 ImageOps.obj


Static symbols

0010:0000417c Init 1001517c f InitOps.obj






The crash log gives the address that caused the problem.




There are two cases: the crash may be in the executable or in a DLL.



Solution for Crash in Executable Application:
------------------------------------------

Example:
---------
AKY=ffffffff PC=7894e118(TestApp.exe 0x1001e118) Data abort

If the crash is in an executable, the solution is as follows:


0x1001e118 is the crash address.


Function Address = Crash Address - Preferred Load Address in the map file - 0x1000 (for the Portable Executable file-format headers);


Solution for Crash in DLL:
-----------------------------


AKY=ffffffff PC=7894e118(imageLib.dll+0x0002478) RA=00013178(wmplayer.exe+0x00003178)


imageLib.dll + 0x0002478 means:

Crash Address = Preferred load address in the map file + crash offset
= 10000000 + 2478
= 10002478. This crash address lies between the Rotate and Blur functions,
so the crash is in the Rotate function.





Preferred load address is 10000000
0002:000003d4 Rotate 100023d4 ImageOps.obj
0002:00000660 Blur 10002660 ImageOps.obj


Note:
--------
100023d4 - Starting Address of the Rotate() fn
( Rotate fn code occupies memory range from 100023d4 to 1000265F )
10002660 - Starting Address of the Blur() fn
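A tiny hedged helper that just restates the two formulas above (purely illustrative; the 0x1000 adjustment is the one this post uses for executables):

// For a DLL crash: address to look up in the map's "Publics by Value" column.
unsigned long MapAddressForDllCrash(unsigned long preferredLoadAddress,
                                    unsigned long crashOffset)
{
    return preferredLoadAddress + crashOffset;            // e.g. 0x10000000 + 0x2478 = 0x10002478
}

// For an EXE crash: function offset relative to the map file.
unsigned long MapAddressForExeCrash(unsigned long crashAddress,
                                    unsigned long preferredLoadAddress)
{
    return crashAddress - preferredLoadAddress - 0x1000;  // 0x1000 for the PE headers
}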



Creation of a Map File in Vs2005:
--------------------------------------------------

At the time of building the project, specifying the linker option

/MAP[:filename] makes the linker create the map file.




/MAP (Generate Mapfile)


/MAP[:filename]
Remarks
where:

filename
A user-specified name for the mapfile. It replaces the default name.

Remarks
The /MAP option tells the linker to create a mapfile.

By default, the linker names the mapfile with the base name of the program and the extension .map. The optional filename allows you to override the default name for a mapfile.

A mapfile is a text file that contains the following information about the program being linked:

The module name, which is the base name of the file

The timestamp from the program file header (not from the file system)

A list of groups in the program, with each group's start address (as section:offset), length, group name, and class

A list of public symbols, with each address (as section:offset), symbol name, flat address, and .obj file where the symbol is defined

The entry point (as section:offset)

The /MAPINFO option specifies additional information to be included in the mapfile.

To set this linker option in the Visual Studio development environment
Open the project's Property Pages dialog box. For details, see Setting Visual C++ Project Properties.

Click the Linker folder.

Click the Debug property page.

Modify the Generate Map File property

For More Info:
http://www.codeproject.com/KB/debug/mapfile.aspx

Friday, July 25, 2008

Streaming Versus Progressive Download

Streaming Versus Progressive Download :
-------------------------------------------------
There are trade-offs to consider when deciding whether to deliver a movie using progressive download, streaming, or broadcasting.
All QuickTime media types can be delivered as progressive downloads. Streaming is limited to sound, video, and text. Broadcasting is further limited to compression schemes and quality settings compatible with real-time capture and compression.
Progressive download works even when the bandwidth is not sufficient for real-time playback; it simply buffers incoming data and delivers delayed playback. Streaming and broadcasting are bandwidth limited; if the connection is not fast enough, the movie cannot play.
Streaming movies do not store a copy of the movie on the client computer, making them inherently more difficult to copy without the consent of the movie’s owner. This can be an important consideration, and is one reason why people choose streaming over progressive download.
Streams take up a specified amount of bandwidth, whereas HTTP file downloads proceed as quickly as the connection allows. It is therefore easier to manage the bandwidth usage of a streaming server than of a web server delivering progressive-download movies.
Broadcasting allows you to deliver coverage of live events as they happen, or to provide real time "chat" between computers.
To sum up, if your movie includes live coverage, you must use broadcasting. If bandwidth management and copy discouragement are paramount considerations, streaming may be your best choice for stored content. If simplicity, reliability, or quality regardless of connection speed are most important to you, progressive download is probably best.

Ref:
http://developer.apple.com/documentation/QuickTime/RM/Fundamentals/QTOverview/QTOverview_Document/chapter_1000_section_6.html

Thursday, July 24, 2008

How to download you tube video as MP4 file:

How to download you tube video as MP4 file:
-------------------------------------------------------------------
http://www.youtube.com/get_video?video_id=ID&t=SIGNATURE&fmt=18
Download a Youtube video:
--------------------------------------
Copy the Youtube link, go to the following site and paste the Youtube link there:
"http://vixy.net/rawvideo/"
We will get something like "Right click here to save as mp4 file";
right-click it and open it in a new window.

We will get a URL like the one above,
http://youtube.com/get_video.php?video_id=zskO9O3hF78&t=OEgsToPDskI7pZ1ChBCot9co5DvFysF9&fmt=18
and it will show the file download dialog; click on the Save button to save it to the local disk.

http://youtube.com/get_video.php?video_id=xoKbDNY0Zwg&t=OEgsToPDskJI-k1mwTeJHOC9_dadX340&fmt=18
http://youtube.com/get_video.php?video_id=xoKbDNY0Zwg&t=OEgsToPDskJI-k1mwTeJHOC9_dadX340&fmt=18

Progressive Download Streaming with Quicktime player

Progressive Download Streaming with Quicktime player:
-------------------------------------------------------
1. Install Apache.
For Progressive Download streaming, install the Apache server and change the "Listen 80" directive in httpd.conf to the desired port:
Listen desired_port
Listen 8080
Put the streamable clips in htdocs (the default virtual directory of Apache).
2. Create streamable clips with Helix Mobile Producer.
In the Input/Output tab, select the input and output files and the export type; click the "Export Settings and Meta data" tab, select the Export Settings option
as Progressive Download, and then click the "Encode" button.
Next, through the QuickTime player or a Progressive Download streaming filter, we can open the following:
http://ApacheRunningIP:8080/clip_path
The QT player will play the file.

WMP doesnt load the Source Filter

WMP doesnt load the Source Filter Problem:
------------------------------------------------
If we give a URL like this:
http://www.youtube.com/get_video?video_id=AdZC8EAp62c&t=OEgsToPDskL9x6BrV7XCzFzahexhvDdL&fmt=18
then Windows Media Player does not load the PD Stream Source DLL.

Currently the Youtube URL is not accessible from my system, so give another wrong URL and check whether Windows Media Player loads our PD Stream Source Filter.
Even though we gave an invalid URL to Windows Media Player (http://10.205.3.30:8080/test.3gp), it loads the PdStream source filter.
http://10.203./get_video?video_id=AdZC8EAp62c&t=OEgsToPDskL9x6BrV7XCzFzahexhvDdL&fm

With a URL without an extension, it does not load the source filter (e.g. http://192.168.233:8080/test);
I got "parameter incorrect" if I used http://192.168.233:8080/




Even though we registered the custom file type (http://), Windows Media Player is not able to load the source filter (we had already registered the file type and source filter in the registry).
Windows Media Player reported the following errors for these URLs:
URL                                                              Windows Media Player error
http://192.10.198.2:5050/                                        Parameter incorrect
http://192.168.10.3:5050/test                                    Source filter for this file can't be loaded
http://192.168.10.3:5050/get_video?video_Id=adAZd&t=susisueie&fmt=18
                                                                 Source filter for this file can't be loaded

The source filter class ID is:
{018778CE-07D8-4656-982E-B5DDB16C71DF}

We had not registered the file extension type (http://) separately. The custom file type registration can be done from within the source filter (by adding the registry settings programmatically):

HKEY_CLASSES_ROOT
   <protocol>
      Source Filter = <CLSID of the source filter>
      Extensions
         <.ext1> = <CLSID>
         <.ext2> = <CLSID>

If we have specified any extensions, the source filter is loaded only when the URL has one of those extensions.
Ex:
HKEY_CLASSES_ROOT
   http://
      Source Filter = <CLSID>
      Extensions
         .mp4 = <CLSID>
         .mp3 = <CLSID>
http://192.168.3.10:5000/test.mp4
http://192.168.3.10:5000/test.mp3   -> only URLs like these load the source filter;
otherwise it will not load the source filter.

Another way of registering the source filter:
----------------------------------------------
HKCR\http
HKCR\http\extensions\
The keys and values are as follows:
.mp4 = {Filter class ID}
.mp3 = {Filter class ID}
So only when the URL carries one of these extensions will WMP load the source filter with the specified class ID.

Finally I solved the problem.
Solution:
-------------
We have to add the "Source Filter" value with our class ID under HKCR\http;
then WMP loads the source filter automatically.
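A hedged sketch of adding that registry entry programmatically (for example from the filter's registration code). The function name is illustrative; the CLSID string is the one quoted earlier in this post:

#include <windows.h>

static const wchar_t kSourceFilterClsid[] =
    L"{018778CE-07D8-4656-982E-B5DDB16C71DF}";

HRESULT RegisterHttpSourceFilter()
{
    HKEY hKey = NULL;
    LONG lRes = RegCreateKeyExW(HKEY_CLASSES_ROOT, L"http", 0, NULL, 0,
                                KEY_SET_VALUE, NULL, &hKey, NULL);
    if (lRes != ERROR_SUCCESS)
        return HRESULT_FROM_WIN32(lRes);

    // "Source Filter" = CLSID of our source filter
    lRes = RegSetValueExW(hKey, L"Source Filter", 0, REG_SZ,
                          (const BYTE *)kSourceFilterClsid,
                          sizeof(kSourceFilterClsid));
    RegCloseKey(hKey);
    return (lRes == ERROR_SUCCESS) ? S_OK : HRESULT_FROM_WIN32(lRes);
}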

Tuesday, July 15, 2008

MP4UI is an open source tool. Many converters do not properly convert the metadata. For example, if I convert a video from WMV to MP4, the metadata available in the WMV is not available in the converted MP4 file.

So we can make use of the MP4UI tool to add metadata to the MP4 file.


MP4UI dialog:

Click the New/Open button and select the MP4 file.

Look at the lower right corner: it has an "M" symbol; just click on it.

After entering all the information, enable the "Part of a compilation" check box and click OK; it will add the metadata to the MP4 file.

Using this metadata, we can know the artist, composer, singer, film and other information about the media file.



Monday, July 14, 2008

Add Meta data to mp4 file thru MP4UI tool

m4a clip meta data problem

m4a clip meta data problem:
-------------------------------------------
We have two different APIs: one to read the iTunes-based file format and another for the 3GP standard file format.
The iTunes file format is based on the QuickTime file format;
they call the m4a file the iTunes file format.
M4A, MP4, M4V and MSNV (MSNV is Sony PSP video) files are based on the MPEG-4 file format,
and the MPEG-4 file format is based on the QuickTime container format;
iTunes also follows the QuickTime container format. So QuickTime, 3GP and MP4 files are all based on the QuickTime container format.
Even if we rename a file from .mp4 to .3gp or from .3gp to .mp4, QuickTime displays the video and audio with metadata.
We have two sets of APIs: one reads the metadata from the QuickTime (MP4) metadata and the other reads the metadata from the
3GP file.
Solution:
------------
1. We can call the two APIs successively to get the metadata information from a file.
2. We can call the metadata APIs based on the file type
(this may not display metadata if we renamed a 3GP file to MP4).
3. One more solution is to check the file format itself.




QuickTime's iTunes, MP4 and 3GP formats use the same container format.
Atom:
--------
We can call the metadata APIs based on the file's first atom,
if that atom is 'ftyp'.
Atom syntax:
--------------
ftyp PreferredBrand MinorVersion CompatibleBrands NULL
Example:
ftyp 3GP4 00 00 02 00 (version in hex) 3GP4 isom mp41
The compatible brands are
3GP4, isom, mp41.

Preferred brand:
-----------------
1. A 4-byte code (it may be something like 3GP4 or mp41).
Note:
--------
The preferred brand must also occur in the compatible brands list.

3GP standards may have the following brands:
3GP4
3GP5
3GP6
So we compare the first 3 characters with "3gp"; if the comparison returns zero, the start code matches:


if (_strnicmp(code, "3gp", 3) == 0)   // the first three characters match
{
}

For an MP4 file, the brand may be mp41, mp42, m4v, m4a, isom and so on,
so we can make a similar check of the first 3 characters against "mp4".
While downloading one MP4 file I faced a problem: its ftyp brand code was
'MSNV', which did not match any of the formats; MSNV is a Sony PSP video recorder format.
QuickTime does the following to check the file format:

ftyp 3GP4 00 00 02 00 (version in hex) 3GP4 isom mp41

QuickTime enumerates the compatible brands.
QuickTime checks whether the file is according to the 3GP4 spec; if it is not, it checks whether
the file format is isom, and if it is not in the isom format either, QuickTime checks the
mp41 format. If one of them is the correct format, QuickTime reads the data from the
file according to that format; if the media file does not match any of the compatible brands, it throws an "invalid file format specification"
error.

Thursday, July 10, 2008

Return values in function argument Pointers

Pointers Sample application:
----------------------------------------
int Function(char* arr)
{
    strcpy(arr, "sun");   /* writes 4 bytes: 's','u','n','\0' */
    return 0;
}

void Main()
{
    char arr[3];          /* note: one byte too small for "sun" plus its NUL terminator */
    Function(arr);
    printf("\n %s", arr);
}
Expected output: sun
Actual result: I got some invalid junk data.


So I modified the main() fn call as follows:

void Main()
{
    char arr[3];
    Function(&arr[0]);
    printf("\n %s", arr);
}
The output is: sun

(Note that arr and &arr[0] are the same address; the undersized buffer, 3 bytes for a 4-byte string, is the more likely real cause of the junk output.)


Differentiate 3GPP and Quicktime compliant

Differentiate 3GPP and QuickTime compliant:
---------------------------------------------------------------
If the atom is 'ftyp', we read the next 3 bytes;
based on these 3 bytes we identify whether the file is QuickTime compliant or
3GPP compliant.
For a 3GP-compliant file it may be the following:
3gp or 3g2 (ftyp3gp or ftyp3g2)
For a QuickTime-compliant file the ftyp codes are:
mp4, m4v, m4a (ftypmp4, ftypm4v, ftypm4a)

Note:
Even if the file extension is .mp4, it may contain the ftyp3gp code.
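A hedged sketch of that check: read the first box header, confirm it is 'ftyp', and compare the first characters of the major brand. The function and enum names are illustrative only; no real parser API is implied.

#include <cstdio>
#include <cstring>

enum FileKind { KIND_UNKNOWN, KIND_3GPP, KIND_QUICKTIME_MP4 };

FileKind DetectKind(const char *path)
{
    FILE *fp = fopen(path, "rb");
    if (!fp) return KIND_UNKNOWN;

    unsigned char hdr[12];                 // 4-byte size, 'ftyp', 4-byte major brand
    size_t n = fread(hdr, 1, sizeof(hdr), fp);
    fclose(fp);
    if (n != sizeof(hdr) || memcmp(hdr + 4, "ftyp", 4) != 0)
        return KIND_UNKNOWN;

    const char *brand = (const char *)hdr + 8;
    if (_strnicmp(brand, "3gp", 3) == 0 || _strnicmp(brand, "3g2", 3) == 0)
        return KIND_3GPP;
    if (_strnicmp(brand, "mp4", 3) == 0 || _strnicmp(brand, "m4v", 3) == 0 ||
        _strnicmp(brand, "m4a", 3) == 0 || _strnicmp(brand, "isom", 4) == 0)
        return KIND_QUICKTIME_MP4;
    return KIND_UNKNOWN;
}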

Wednesday, July 09, 2008

Camera Driver Pin resolution problem


Camera Driver Pin resolution problem:
---------------------------------------------------------
1. The capture pin's resolutions must also be supported on the preview pin.
The problem appears if the capture pin resolutions do not match the preview pin resolutions.
Cause:
--------
This is a bug in the camera/capture driver: every capture pin resolution must be supported on the preview pin.

Monday, July 07, 2008

DSHOW ATL Error

When I try to compile and run the PlayWnd sample in VS2005,
the following linker error appears:

error LNK2001: unresolved external symbol "unsigned int (__stdcall* ATL::g_pfnGetThreadACP)(void)" (?g_pfnGetThreadACP@ATL@@3P6GIXZA)

Solution:
------------
g_pfnGetThreadACP is referenced by USES_CONVERSION, which is used by W2T and the other ATL conversion macros.
Link against atlsd.lib (it contains the g_pfnGetThreadACP definition), for example as shown below.
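
One way to pull the library in directly from code, assuming a debug build (a release build would link atls.lib instead of atlsd.lib):

// Link the ATL static library that defines g_pfnGetThreadACP (debug build).
#pragma comment(lib, "atlsd.lib")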

Observation about Less than 1 Sec Video

What I did:
------------------
1. I queried GetCurrentPosition() from the player application;
the TIME_FORMAT_MEDIA_TIME format is set, the same as in our filters.
If 30000 is the maximum duration, then seeking to 30001 positions us at the end of the file.



TrackType =0, Start : 0,Stop :990000
TrackType =0, Start : 990000,Stop :1990000
TrackType =0, Start : 1990000,Stop :3050000
TrackType =0, Start : 3050000,Stop :3990000
TrackType =0, Start : 3990000,Stop :4990000
TrackType =0, Start : 4990000,Stop :5980000
TrackType =0, Start : 5980000,Stop :6980000
Track Type:0 Track Duration is 798,Total Duration is:798
VideoTrack Reaches End: 8380000
TrackType =0, Start : 6980000,Stop :8380000
Stream time is 7980000 and m_nCurrentPos is 7980000, but for the last media sample we set the timestamp as 8380000,
which is beyond the stream time.

In the player application I get the total duration of the media file
and print the current position using the IMediaSeeking interface (a sketch of that query follows below).
This is the problem with a multimedia file having less than 1 second of video and audio;
if I add 200 milliseconds, it works fine.
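
A rough sketch of the query described above, assuming pGraph is the already-built IGraphBuilder of the player application:

#include <dshow.h>

void DumpPosition(IGraphBuilder* pGraph)
{
    IMediaSeeking* pSeeking = NULL;
    if (FAILED(pGraph->QueryInterface(IID_IMediaSeeking, (void**)&pSeeking)))
        return;

    // Use 100-ns reference-time units, the same format our filters use.
    pSeeking->SetTimeFormat(&TIME_FORMAT_MEDIA_TIME);

    LONGLONG llDuration = 0, llCurrent = 0;
    pSeeking->GetDuration(&llDuration);
    pSeeking->GetCurrentPosition(&llCurrent);

    wchar_t szMsg[MAX_PATH];
    swprintf(szMsg, L"\n Duration : %I64d Current : %I64d", llDuration, llCurrent);
    OutputDebugString(szMsg);

    pSeeking->Release();
}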

Tuesday, July 01, 2008

windows media player Seek Bar will not work for video having less than 1 second duration?:


windows media player Seek Bar will not work for video having less than 1 second duration?:
-------------------------------------------------------------------------------------------
We checked whether Windows Media Player displays the seek bar properly for a video of less than 1 second duration. We recorded a WMV video of less than 1 second using PIMG; it plays fine in Media Player.

Within the Motorola cell phone we hard-reset the device (by default it records WMV)
and recorded a file to test this scenario.

Problem while Playing the video less than one second

Problem while Playing the video less than one second:
-----------------------------------------------------
If the video is less than one second long, the Media Player seek bar never reaches the end position.
I tested a video of less than 1 second (about 0.8 seconds, as the numbers below show) with
AAC encoded audio, QCELP encoded audio, and AMR-NB encoded audio in a 3GP file,
and I got the same error in every case:

For video less than 1 second:
-----------------------------------
GetDuration : 7930000
End Stop value of the media sample is : 7190001
7190001 / 7930000 = 0.90, so the track bar is not shown at the end in Media Player.




For video longer than 1 second:
-------------------
GetDuration : 172400000
End Stop value of the media sample is : 172200001
172200001 / 172400000 = 0.99, so the error is not as visible for this larger video,
but the track bar still never reaches the end position in Media Player.
So the problem is with the timestamp calculation.

No Time stamp has been set for this sample Error

I got the "No Time stamp has been set for this sample" while playing the 3gp file;
-----------------------------------------------------------------------------------
HRESULT FillBuffer()
{
HRESULT hr;
hr = GetMP4Sample (pSample);
if(SUCCEEDED(hr))
{
REFERENCE_TIME rtStart = 0, rtStop = 0;
hr = pSample->GetTime(&rtStart, &rtStop);
if(SUCCEEDED(hr))
{
wchar_t szMsg[MAX_PATH];
swprintf(szMsg,L"\n 0-vid,1- audio,TrackType = %d, Start = %ld, Stop : %ld",m_nTrakType,(LONG)rtStart,(LONG) rtStop);
OutputDebugString(szMsg);
}
}
return hr;

}



Solution:
===========
The code above produces the "No timestamp has been set for this sample" error.
The reason is that if GetTime() fails, its failed HRESULT overwrites hr and is then returned from
FillBuffer(), and the base class reports that failure as this error. If I modify the code as follows,
keeping GetTime()'s result in a separate HRESULT, the error no longer occurs:

hr = GetMP4Sample(pSample);
if(SUCCEEDED(hr))
{
    REFERENCE_TIME rtStart = 0, rtStop = 0;
    HRESULT temphr = pSample->GetTime(&rtStart, &rtStop);   // do not overwrite hr
    if(SUCCEEDED(temphr))
    {
        wchar_t szMsg[MAX_PATH];
        swprintf(szMsg, L"\n 0-vid,1-audio, TrackType = %d, Start = %I64d, Stop = %I64d",
                 m_nTrakType, rtStart, rtStop);
        OutputDebugString(szMsg);
    }
}

Record the video and audio using PIMG with desired video and audio encoder

Record the video and audio using PIMG with desired video and audio encoder:
----------------------------------------------------------------------------

Set the following values under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Pictures\Camera\OEM :
AudioEncoderClassID               - the AAC encoder's class ID
\VideoProfile\1\AudioEncoderClassID
\VideoProfile\2\AudioEncoderClassID
\VideoProfile\3\AudioEncoderClassID
VideoEncoderClassID
MuxerClassID

{f0db0b53-53ff-49b9-8c51-4ed44cec6d46} - AAC Encoder
Just copy our encoder DMO's class ID to the registry locations above (see the sketch below).
PIMG (Microsoft's Pictures & Videos application) records, encodes and multiplexes the video according to these OEM registry settings.
I have tested this behavior on a Motorola device, but some phones (for example Asus) may not implement it.
By default, if the device records WMV files, the corresponding information is available at this registry location.
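
A rough sketch of writing one of these values from code; the key path and value name come from above, RegCreateKeyEx/RegSetValueEx are the standard registry APIs (Unicode build assumed), and the CLSID string is the AAC encoder example, to be replaced with your own encoder DMO's CLSID.

#include <windows.h>

BOOL SetPimgAudioEncoder()
{
    HKEY hKey = NULL;
    LONG lRet = RegCreateKeyEx(HKEY_LOCAL_MACHINE,
                               L"SOFTWARE\\Microsoft\\Pictures\\Camera\\OEM",
                               0, NULL, 0, KEY_WRITE, NULL, &hKey, NULL);
    if (lRet != ERROR_SUCCESS) return FALSE;

    // CLSID string of the encoder DMO that PIMG should use.
    const wchar_t szClsid[] = L"{f0db0b53-53ff-49b9-8c51-4ed44cec6d46}";
    lRet = RegSetValueEx(hKey, L"AudioEncoderClassID", 0, REG_SZ,
                         (const BYTE*)szClsid, sizeof(szClsid));
    RegCloseKey(hKey);
    return (lRet == ERROR_SUCCESS);
}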

Friday, June 27, 2008

Performance of video rendering in Capture application

Performance of video rendering in Capture application
-------------------------------------------------------------------------------
I was loading the COM environment for every graph build;
loading and releasing the COM environment is done with the CoInitialize() and
CoUninitialize() functions.
This made video rendering in the recorder application slow, because initializing the COM
environment for every graph build takes a lot of time.
If I initialize the COM environment only once for the lifetime of the application, video rendering
becomes much faster than when initializing it for every graph build (see the sketch below).
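
A minimal sketch of the "initialize once" approach; RecorderMain() is just a placeholder for the application's entry point.

#include <objbase.h>

int RecorderMain()
{
    if (FAILED(CoInitialize(NULL)))     // load the COM environment once, at startup
        return -1;

    // ... build, run and tear down filter graphs as many times as needed,
    //     without calling CoInitialize()/CoUninitialize() per graph ...

    CoUninitialize();                   // release the COM environment once, at exit
    return 0;
}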

Error While Releasing the Dshow Filters ( Windows Mobile)

Error while releasing filters:
-------------------------------
Data Abort: Thread=8a9e50b4 Proc=88a8b6d0 'RecorderApp.exe'
AKY=00020001 PC=000175f8(RecorderApp.exe+0x000075f8) RA=0001562c(RecorderApp.exe+0x0000562c) BVA=24000008 FSR=00000007
Solution:
----------
Stop the recording stream using the ControlStream() fn, stop the graph with IMediaControl::Stop(),
and only then release the filters; the data abort no longer occurs (see the sketch below).
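
A rough sketch of that shutdown order; the member names mirror the other snippets in this blog and are assumptions here, and ReleaseFilters() is a hypothetical helper.

void CRecorder::StopAndRelease()
{
    REFERENCE_TIME rtStart = MAXLONGLONG, rtStop = 0;    // "never start" = stop

    // 1. Stop the capture pin stream.
    m_pCaptureGraphBuilder2->ControlStream(&PIN_CATEGORY_CAPTURE,
                                           &MEDIATYPE_Video,
                                           m_pVideoCaptureFilter,
                                           &rtStart, &rtStop, 1, 2);

    // 2. Stop the whole graph.
    if (m_pMediaControl)
        m_pMediaControl->Stop();

    // 3. Only now release the filters and the graph.
    ReleaseFilters();                                    // hypothetical helper
}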

CLSID_VideoCapture Filter Problem in windows Mobile Recorder application

CLSID_VideoCapture Filter problem:
----------------------------------------
I developed a video capture application.
I created the video capture filter with CoCreateInstance(CLSID_VideoCapture)
and rendered the video successfully. If I release the video capture filter and create it again
with CoCreateInstance(CLSID_VideoCapture), I get a problem in
IPropertyBag::Load():


CComPtr<IPropertyBag> m_pPropertyBag;
CPropertyBag m_PropBag;
CComVariant m_varCamName;

hr = CoCreateInstance( CLSID_VideoCapture, NULL, CLSCTX_INPROC,
IID_IBaseFilter, (void** ) &m_pVideoCaptureFilter );

hr = m_pVideoCaptureFilter->QueryInterface( &m_pPropertyBag );
m_varCamName = _T ("CAM1:");
hr = m_PropBag.Write( L"VCapName", &m_varCamName );
hr = m_pPropertyBag->Load( &m_PropBag, NULL);
m_pPropertyBag.Release();
If I release the video capture filter and create it a second time, the call


hr = m_pPropertyBag->Load( &m_PropBag, NULL);
fails with the error
"The specified network resource or device is no longer available".


Solution:
------------
The error message indicates that the video capture filter was not properly released;
we need to release it properly.



We thought we were releasing the video capture filter properly, but it was not actually released, so the next attempt failed.
To make sure it is released, release the video capture filter only after all the streams have been stopped.




Code:
---------
For Releasing All the Filters and their Pins use the following code:

For Releasing the Source Filter use the following two methods and release all other things normally;
void CRecorder::NukeDownstream(IBaseFilter *pf)
{
    IPin *pP = 0, *pTo = 0;
    ULONG u;
    IEnumPins *pins = NULL;
    PIN_INFO pininfo;
    if (!pf)
        return;
    HRESULT hr = pf->EnumPins(&pins);
    if (FAILED(hr) || !pins)
        return;
    pins->Reset();
    while (hr == NOERROR)
    {
        hr = pins->Next(1, &pP, &u);
        if (hr == S_OK && pP)
        {
            pP->ConnectedTo(&pTo);
            if (pTo)
            {
                hr = pTo->QueryPinInfo(&pininfo);
                if (hr == NOERROR)
                {
                    if (pininfo.dir == PINDIR_INPUT)
                    {
                        // Recurse into the downstream filter, then disconnect
                        // both ends of the connection and remove that filter.
                        NukeDownstream(pininfo.pFilter);
                        m_pGraphBuilder->Disconnect(pTo);
                        m_pGraphBuilder->Disconnect(pP);
                        m_pGraphBuilder->RemoveFilter(pininfo.pFilter);
                    }
                    pininfo.pFilter->Release();
                }
                pTo->Release();
            }
            pP->Release();
        }
    }
    pins->Release();
}



void CRecorder::MyNukeDown( IBaseFilter *pFilter )
{
    IEnumPins *pEnumPins = NULL;
    IPin *pPin = NULL;
    ULONG uNumPin;
    HRESULT hr;
    do
    {
        if( NULL == pFilter )
        {
            break;
        }
        hr = pFilter->EnumPins(&pEnumPins);
        if( FAILED(hr) || !pEnumPins )
        {
            break;
        }
        pEnumPins->Reset();
        while( S_OK == pEnumPins->Next( 1, &pPin, &uNumPin ) )
        {
            hr = m_pGraphBuilder->Disconnect(pPin);
            if( S_OK != hr )
            {
                OutputDebugString(L"\n Disconnect Failed");
            }
            pPin->Release();        // release each enumerated pin
            pPin = NULL;
        }
        m_pGraphBuilder->RemoveFilter( pFilter );
        pFilter->Release();
        //HELPER_RELEASE( pFilter );
        pEnumPins->Release();
    }while( FALSE );
}
Usage of this code is as follows:
NukeDownstream(m_pVideoCaptureFilter);
MyNukeDown(m_pVideoCaptureFilter);
CHECK_AND_RELEASE_COM_QI(m_pVideoWindow);
#ifdef _STILL
CHECK_AND_RELEASE_COM(m_pIImageSinkFilter);
CHECK_AND_RELEASE_COM(m_pImageSinkFilter);
#endif

CHECK_AND_RELEASE_COM(m_pVideoRenderer);
CHECK_AND_RELEASE_COM(m_pVideoEncoder);
CHECK_AND_RELEASE_COM(m_pVideoCaptureFilter);

Dshow Camera capture application Problem in Windows Mobile

Dshow Camera capture application Problem in Windows Mobile:
-----------------------------------------------------------
pimg.exe (Pictures and videos) behavior:
-----------------------------------------
If we are writing a camera capture application, then whenever the user hits the Home button the application must release the
video and audio capture sources. Whenever the user presses the Back button (if available) or switches back to the camera application through the Task Manager,
the camera capture application must resume normally and render video from the video source again.
pimg.exe makes use of the video and audio capture filters this way, and every camera capture application must follow the same behavior.
In my camera capture application, whenever the user hits the Home button the application focus changes,
and the Windows Mobile OS sends the following messages sequentially:
1. WM_ACTIVATE with WA_INACTIVE
2. WM_ACTIVATE with WA_ACTIVE

So whenever the user hits the Home button, a WM_ACTIVATE message with WA_INACTIVE is fired.
Steps:
-------
1. When the Home button is pressed (WM_ACTIVATE with WA_INACTIVE), release the video and audio capture filters and
set a boolean variable bInActive to TRUE.
2. When the Back button is pressed, the camera application regains focus, but it does not receive any focus/activate/WM_ACTIVATE message.
So whenever our camera application gets focus it must repaint its window;
therefore in the WM_PAINT handler I check whether the bInActive flag was set, and if so I CoCreateInstance the video and audio
capture filters again and start rendering video (see the sketch after these steps).
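
A rough sketch of the message handling described above; ReleaseCaptureFilters() and RebuildGraphAndRender() are placeholders for the application's own helpers.

LRESULT CALLBACK CameraWndProc(HWND hWnd, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
    static BOOL bInActive = FALSE;

    switch (uMsg)
    {
    case WM_ACTIVATE:
        if (LOWORD(wParam) == WA_INACTIVE)      // Home key pressed, focus lost
        {
            ReleaseCaptureFilters();            // free the video/audio capture sources
            bInActive = TRUE;
        }
        break;

    case WM_PAINT:
        if (bInActive)                          // we are being shown again
        {
            RebuildGraphAndRender();            // CoCreateInstance the capture filters and render video
            bInActive = FALSE;
        }
        break;
    }
    return DefWindowProc(hWnd, uMsg, wParam, lParam);
}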


Tuesday, June 24, 2008

VS2005 Output Window display Problem

VS2005 Output Window display Problem
--------------------------------------------------------------

While developing an application in VS2005,
I faced a problem viewing the Output window: even though
I selected View -> Output, the Output window was not displayed.

Solution :
-----------
Select Window -> Reset Window Layout; now clicking View -> Output displays the
Output window.


Note:
----------
Not only for the Output window: the same approach works for all
windows available in VS2005.

Thursday, June 19, 2008

WinCE Camera application architecture

winCE Camera Capture application Architecture
-----------------------------------------------------------------------
1.Preview pin is connected to the Video Renderer
2.Capture pin is connected as follows:

Capture Pin -> Video Encoder -> Muxer filter


Normally the Preview pin renders the video; the Capture pin stream is
stopped using the ControlStream() fn.
Whenever the "Start Recording" button is pressed, we need to
start the Capture pin stream.


In other words, the captured video data must be written into the media
file and at the same time rendered on screen.


For doing this, we connect the filters as follows (see the sketch below):

Video Capture Pin -> Video Encoder -> Muxer
Audio Capture Pin -> Audio Encoder -> Muxer

Video Preview Pin -> Video Renderer

Until the user gives the "Start Recording" command, we need to stop the
streams on the video capture and audio capture pins;
we start them only when the user gives "Start Recording".


If no media file has been recorded yet, the muxer filter creates an empty file
during connection itself.

We need to delete it: if no media file was actually recorded, the current
filename should be deleted from the file system.

This is what we have done.
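
A rough sketch of these connections using ICaptureGraphBuilder2::RenderStream; the filter pointers are assumed to be created and added to the graph already (m_pMuxFilter and m_pAudioEncoder are assumed member names), and error handling is trimmed.

HRESULT CRecorder::BuildCaptureGraph()
{
    // Video capture pin -> video encoder -> muxer
    HRESULT hr = m_pCaptureGraphBuilder2->RenderStream(&PIN_CATEGORY_CAPTURE,
                                                       &MEDIATYPE_Video,
                                                       m_pVideoCaptureFilter,
                                                       m_pVideoEncoder,
                                                       m_pMuxFilter);
    if (FAILED(hr)) return hr;

    // Audio capture pin -> audio encoder -> muxer
    hr = m_pCaptureGraphBuilder2->RenderStream(&PIN_CATEGORY_CAPTURE,
                                               &MEDIATYPE_Audio,
                                               m_pAudioCaptureFilter,
                                               m_pAudioEncoder,
                                               m_pMuxFilter);
    if (FAILED(hr)) return hr;

    // Preview pin -> video renderer (NULL sink lets the builder pick the default renderer).
    hr = m_pCaptureGraphBuilder2->RenderStream(&PIN_CATEGORY_PREVIEW,
                                               &MEDIATYPE_Video,
                                               m_pVideoCaptureFilter,
                                               NULL, NULL);
    return hr;
}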

WinCE Live Capture ControlStream problem

ControlStream Function:
------------------------

m_pCaptureGraphBuilder2->ControlStream //Stop the capture Pin
(
&PIN_CATEGORY_CAPTURE,
&MEDIATYPE_Video,
m_pVideoCaptureFilter,
&rtStart,
&rtStop,1,2
);


ControlStream() on a live source starts and stops a pin's stream:


start = 0, stop = MAXLONGLONG   - start the stream now
start = MAXLONGLONG, stop = 0   - stop the stream

Problem :
----------
1. I stopped the Capture pin and started the Preview pin using the
ControlStream() fn;

it gives a black rectangle instead of rendering video.

Solution:
---------
After analyzing the stream, the reason for the problem is that I had not
started the graph with the IMediaControl Run() method;


once I call IMediaControl::Run(), it works fine:


HRESULT StartPreview()
{

HRESULT hr = S_OK;
REFERENCE_TIME rtPreviewStart = 0,rtPreviewStop = MAXLONGLONG;
REFERENCE_TIME rtStart, rtStop;
rtStart = MAXLONGLONG;
rtStop = 0;


hr = m_pCaptureGraphBuilder2->ControlStream //Stop the capture Pin
(
&PIN_CATEGORY_CAPTURE,
&MEDIATYPE_Video,
m_pVideoCaptureFilter,
&rtStart,
&rtStop,1,2
);

if(FAILED(hr))
{
printf("\n VideoControlStream() Stop Failure Error Code:%x",hr);
return hr;
}
//Start the Preview Stream
hr = m_pCaptureGraphBuilder2->ControlStream( &PIN_CATEGORY_PREVIEW,
&MEDIATYPE_Video,
m_pVideoCaptureFilter,
&rtPreviewStart, // Start now.
&rtPreviewStop, // (Don't care.)
3,4
);


if(FAILED(hr))
{
printf("\n ControlStream() fn Error : %x",hr);
return hr;
}

hr = m_pCaptureGraphBuilder2->ControlStream( &PIN_CATEGORY_CAPTURE,
&MEDIATYPE_Audio,
m_pAudioCaptureFilter,
&rtStart, // Start now.
&rtStop, // (Don't care.)
5,6
);


if(FAILED(hr))
{
printf("\n Audio ControlStream() Stop Failure Error Code:%x",hr);
return hr;
}

if(m_pMediaControl)
{
m_pMediaControl->Run();
}

return hr;


}

Tuesday, June 17, 2008

WinCE Directshow Video Capture application Problem

I faced the following problem in a video capture application:


RenderStream() executed without any problem the first
time; the second time I called RenderStream() it returned
E_FAIL.

Solution:
------------------

1. CoCreateInstance for the video capture source filter (CLSID_VideoCapture)
must be called only once in a program; otherwise RenderStream() fails
whenever we reconstruct the filter graph after releasing the filters in it.

2. So for the video capture source filter we create the
object once (using CoCreateInstance) and reuse that object throughout
the application (see the sketch below);

otherwise we get the RenderStream() failure.
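
A rough sketch of that workaround: create the video capture filter once, keep it as a member, and hand the same object to every graph rebuild. The member names are assumptions.

HRESULT CRecorder::GetVideoCaptureFilter(IBaseFilter** ppFilter)
{
    HRESULT hr = S_OK;
    if (m_pVideoCaptureFilter == NULL)          // create only once per process
    {
        hr = CoCreateInstance(CLSID_VideoCapture, NULL, CLSCTX_INPROC,
                              IID_IBaseFilter, (void**)&m_pVideoCaptureFilter);
    }
    *ppFilter = m_pVideoCaptureFilter;          // reused across graph rebuilds
    return hr;                                  // the member keeps the reference
}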

Saturday, May 31, 2008

Format Specifier ULONGLONG type

Format specifier for ULONGLONG type:
ULONGLONG t = 10292029202202ULL;

Use %I64u :
wchar_t szMsg[MAX_PATH];
swprintf(szMsg, L"\n Value of t is : %I64u", t);
OutputDebugString(szMsg);

Thursday, May 22, 2008

Image Stride

When a video image is stored in memory, the memory buffer might contain extra padding bytes after each row of pixels. The padding bytes affect how the image is stored in memory, but do not affect how the image is displayed.

The stride is the number of bytes from one row of pixels in memory to the next row of pixels in memory. Stride is also called pitch. If padding bytes are present, the stride is wider than the width of the image, as shown in the following illustration.

Two buffers that contain video frames with equal dimensions can have two different strides. If you process a video image, you must take the stride into account.

In addition, there are two ways that an image can be arranged in memory. In a top-down image, the top row of pixels in the image appears first in memory. In a bottom-up image, the last row of pixels appears first in memory. The following illustration shows the difference between a top-down image and a bottom-up image.

A bottom-up image has a negative stride, because stride is defined as the number of bytes needed to move down a row of pixels, relative to the displayed image. YUV images should always be top-down, and any image that is contained in a Direct3D surface must be top-down. RGB images in system memory are usually bottom-up.

Video transforms in particular need to handle buffers with mismatched strides, because the input buffer might not match the output buffer. For example, suppose that you want to convert a source image and write the result to a destination image. Assume that both images have the same width and height, but might not have the same pixel format or the same image stride.

The following example code shows a generalized approach for writing this kind of function. This is not a complete working example, because it abstracts many of the specific details.


void ProcessVideoImage(
    BYTE* pDestScanLine0,
    LONG lDestStride,
    const BYTE* pSrcScanLine0,
    LONG lSrcStride,
    DWORD dwWidthInPixels,
    DWORD dwHeightInPixels
    )
{
    for (DWORD y = 0; y < dwHeightInPixels; y++)
    {
        // Cast each scan-line pointer to the pixel layout of its own image.
        const SOURCE_PIXEL_TYPE *pSrcPixel = (const SOURCE_PIXEL_TYPE*)pSrcScanLine0;
        DEST_PIXEL_TYPE *pDestPixel = (DEST_PIXEL_TYPE*)pDestScanLine0;

        for (DWORD x = 0; x < dwWidthInPixels; x++)
        {
            pDestPixel[x] = TransformPixelValue(pSrcPixel[x]);
        }

        // Advance each pointer by its own stride to reach the next row.
        pDestScanLine0 += lDestStride;
        pSrcScanLine0 += lSrcStride;
    }
}

This function takes six parameters:

  • A pointer to the start of scan line 0 in the destination image.

  • The stride of the destination image.

  • A pointer to the start of scan line 0 in the source image.

  • The stride of the source image.

  • The width of the image in pixels.

  • The height of the image in pixels.

The general idea is to process one row at a time, iterating over each pixel in the row. Assume that SOURCE_PIXEL_TYPE and DEST_PIXEL_TYPE are structures representing the pixel layout for the source and destination images, respectively. (For example, 32-bit RGB uses the RGBQUAD structure. Not every pixel format has a pre-defined structure.) Casting the array pointer to the structure type enables you to access the RGB or YUV components of each pixel. At the start of each row, the function stores a pointer to the row. At the end of the row, it increments the pointer by the image stride, which advances the pointer to the next row.

This example calls a hypothetical function named TransformPixelValue for each pixel. This could be any function that calculates a target pixel from a source pixel. Of course, the exact details will depend on the particular task. For example, if you have a planar YUV format, you must access the chroma planes independently from the luma plane; with interlaced video, you might need to process the fields separately; and so forth.

To give a more concrete example, the following code converts a 32-bit RGB image into an AYUV image. The RGB pixels are accessed using an RGBQUAD structure, and the AYUV pixels are accessed using a DXVA2_AYUVSample8 structure.


//-------------------------------------------------------------------
// Name: RGB32_To_AYUV
// Description: Converts an image from RGB32 to AYUV
//-------------------------------------------------------------------

void RGB32_To_AYUV(
    BYTE* pDest,
    LONG lDestStride,
    const BYTE* pSrc,
    LONG lSrcStride,
    DWORD dwWidthInPixels,
    DWORD dwHeightInPixels
    )
{
    for (DWORD y = 0; y < dwHeightInPixels; y++)
    {
        RGBQUAD *pSrcPixel = (RGBQUAD*)pSrc;
        DXVA2_AYUVSample8 *pDestPixel = (DXVA2_AYUVSample8*)pDest;

        for (DWORD x = 0; x < dwWidthInPixels; x++)
        {
            pDestPixel[x].Alpha = 0x80;
            pDestPixel[x].Y  = RGBtoY(pSrcPixel[x]);
            pDestPixel[x].Cb = RGBtoU(pSrcPixel[x]);
            pDestPixel[x].Cr = RGBtoV(pSrcPixel[x]);
        }
        pDest += lDestStride;
        pSrc  += lSrcStride;
    }
}

The next example converts a 32-bit RGB image to a YV12 image. This example shows how to handle a planar YUV format. (YV12 is a planar 4:2:0 format.) In this example, the function maintains three separate pointers for the three planes in the target image. However, the basic approach is the same as the previous example.


void RGB32_To_YV12(
    BYTE* pDest,
    LONG lDestStride,
    const BYTE* pSrc,
    LONG lSrcStride,
    DWORD dwWidthInPixels,
    DWORD dwHeightInPixels
    )
{
    assert(dwWidthInPixels % 2 == 0);
    assert(dwHeightInPixels % 2 == 0);

    const BYTE *pSrcRow = pSrc;
    BYTE *pDestY = pDest;

    // Calculate the offsets for the V and U planes.
    // In YV12, each chroma plane has half the stride and half the height of the Y plane.
    BYTE *pDestV = pDest + (lDestStride * dwHeightInPixels);
    BYTE *pDestU = pDest + (lDestStride * dwHeightInPixels) + ((lDestStride * dwHeightInPixels) / 4);

    // Convert the Y plane.
    for (DWORD y = 0; y < dwHeightInPixels; y++)
    {
        RGBQUAD *pSrcPixel = (RGBQUAD*)pSrcRow;
        for (DWORD x = 0; x < dwWidthInPixels; x++)
        {
            pDestY[x] = RGBtoY(pSrcPixel[x]); // Y0
        }
        pDestY += lDestStride;
        pSrcRow += lSrcStride;
    }

    // Convert the V and U planes.
    // YV12 is a 4:2:0 format, so each chroma sample is derived from four RGB pixels.
    pSrcRow = pSrc;
    for (DWORD y = 0; y < dwHeightInPixels; y += 2)
    {
        RGBQUAD *pSrcPixel = (RGBQUAD*)pSrcRow;
        RGBQUAD *pNextSrcRow = (RGBQUAD*)(pSrcRow + lSrcStride);

        BYTE *pbV = pDestV;
        BYTE *pbU = pDestU;

        for (DWORD x = 0; x < dwWidthInPixels; x += 2)
        {
            // Use a simple average to downsample the chroma.
            *pbV++ = ( RGBtoV(pSrcPixel[x]) +
                       RGBtoV(pSrcPixel[x + 1]) +
                       RGBtoV(pNextSrcRow[x]) +
                       RGBtoV(pNextSrcRow[x + 1]) ) / 4;

            *pbU++ = ( RGBtoU(pSrcPixel[x]) +
                       RGBtoU(pSrcPixel[x + 1]) +
                       RGBtoU(pNextSrcRow[x]) +
                       RGBtoU(pNextSrcRow[x + 1]) ) / 4;
        }
        pDestV += lDestStride / 2;
        pDestU += lDestStride / 2;

        // Skip two lines on the source image.
        pSrcRow += (lSrcStride * 2);
    }
}

In all of these examples, it is assumed that the application has already determined the image stride. You can sometimes get this information from the media buffer. Otherwise, you must calculate it based on the video format. For more information about calculating image stride and working with media buffers for video, see Uncompressed Video Buffers.