---------------------------
Hard coded break point
---------------------------
"IQualityControl::Notify not over-ridden from CBasePin. (IGNORE is OK)"
At line 2346 of C:\DXSDK\Samples\C++\DirectShow\BaseClasses\amfilter.cpp
Continue? (Cancel to debug)
---------------------------
Yes No Cancel
---------------------------
Solution :
----------
This assertion is raised at runtime when the filter graph sends a quality message to a pin that does not override IQualityControl::Notify() from CBasePin. For this error I added the following override in the classes derived from CBasePin:
HRESULT Notify(IBaseFilter *pSelf, Quality q) { return S_OK; }
Now it works well. (As the message itself says, ignoring the notification is OK.)
Friday, December 21, 2007
Filter or DLL or .ocx registration Error
-------------------------------------------
Even though the DLL compiles and links cleanly, I got the following error while registering the filter (DLL/.ocx) with the regsvr32 utility:
---------------------------
RegSvr32
---------------------------
.\Debug\HTTPWriterSinkFilterd.ax was loaded, but the DllRegisterServer entry point was not found.
.\Debug\HTTPWriterSinkFilterd.ax does not appear to be a .DLL or .OCX file.
---------------------------
OK
---------------------------
Solution Steps :
---------------
1. I opened the DLL in Dependency Walker (depends.exe, available with the Visual Studio tools).
2. It showed that no function is exported from the DLL (filter) file.
3. Functions can be exported through a .def file (module definition file).
4. Next I opened some DLL/.ax files that register successfully in Dependency Walker; for those, it lists the exported functions.
5. So the problem was that the functions were not being exported.
6. I opened Project -> Settings -> Linker -> Input and checked whether my .def file was specified anywhere; it was not. So I added the following option:
Project -> Settings -> Linker -> Input -> /DEF:My.def
Now I compiled the project and registered it with the regsvr32 utility.
The registration completed successfully, and if I open the registered DLL in Dependency Walker, it now displays the exported functions.
PostScript :
------------
Some DLL projects register fine even without an explicit "/DEF:my.def", because they export the entry points another way (for example with __declspec(dllexport)).
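For reference, a minimal module definition file for a filter looks like the sketch below. The library name is hypothetical (use your own project's output name); the exports are the standard COM self-registration entry points, as in the DirectShow base-classes samples:

```
; My.def - exports needed for regsvr32 to find the entry points
LIBRARY HTTPWriterSinkFilterd.ax
EXPORTS
    DllMain                 PRIVATE
    DllGetClassObject       PRIVATE
    DllCanUnloadNow         PRIVATE
    DllRegisterServer       PRIVATE
    DllUnregisterServer     PRIVATE
```

PRIVATE keeps the export out of the import library; it is what the SDK sample .def files use.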
Thursday, December 20, 2007
How to pass arguments from window to the windowProc function
The pointer is passed through the lpParam argument of CreateWindow(); the system delivers it to the window procedure inside a CREATESTRUCT:
int* i = new int(40);
HWND hWnd = CreateWindow(szClass, szTitle, WS_OVERLAPPEDWINDOW,
                         CW_USEDEFAULT, 0, CW_USEDEFAULT, 0,
                         NULL, NULL, hInstance, i); // last argument becomes lpCreateParams
LRESULT CALLBACK WindowProc(HWND hWnd, UINT Msg, WPARAM wParam, LPARAM lParam)
{
    switch (Msg)
    {
    case WM_CREATE: // lParam is a CREATESTRUCT* for WM_CREATE / WM_NCCREATE
        {
            int* j = (int*)((LPCREATESTRUCT)lParam)->lpCreateParams;
        }
        break;
    }
    return DefWindowProc(hWnd, Msg, wParam, lParam);
}
Another way is to attach the value to the window itself. wc.cbWndExtra at WNDCLASS registration time reserves extra window bytes; the per-window user data slot is always available regardless. We can store and retrieve the value using SetWindowLong() and GetWindowLong():
SetWindowLong(hWnd, GWL_USERDATA, (LONG)i);
int* j = (int*)GetWindowLong(hWnd, GWL_USERDATA);
We can also use SetWindowLongPtr()/GetWindowLongPtr() with GWLP_USERDATA for this purpose; they are the 64-bit-safe versions.
Window Creation problem with WM_NCCREATE message
I registered the class in a Win32 SDK application and created the window using CreateWindow(). CreateWindow() returned NULL, but GetLastError() returned 0 (success), so window creation failed without any valid error information. I was handling the WM_NCCREATE message in the window procedure.
Solution :
----------
Within WindowProc() I had handled WM_NCCREATE as follows:
case WM_NCCREATE :
break;
This is the reason for the window creation failure: falling out of the switch returned 0 (FALSE) for the message. If we handle WM_NCCREATE ourselves, we must return TRUE; only then will window creation succeed:
case WM_NCCREATE :
return TRUE;
Friday, December 14, 2007
Identify the KeyFrame in a compressed input pin Filter
I developed a NetWriter filter that accepts Windows Media Video as input. Within the NetWriter filter, how can we identify a key frame? WMV is a compressed format, so how can we identify the key frame? In general, if the input pin of a transform filter receives compressed data, how can we identify the key frames?
Solution:
-----------
Use IMediaSample's IsSyncPoint() method. IsSyncPoint() returns S_OK if the sample is a key frame; otherwise it returns S_FALSE. In DirectShow, the technical name for a key frame is a synchronization point. So we check it as:
if (pSample->IsSyncPoint() == S_OK) { /* key frame */ }
How to identify the protected wmv/asf file?
IWMReader->Open(file)
If the reader is not able to open the wmv or asf file, it returns an error code indicating that a license is required. After that, we can use IWMDRMReader to query the license details.
A sample program is available in the Windows Media Format SDK samples:
DRMShow.exe -i "DRMShow.wmv" // DRMShow.wmv is available in the DirectX media directory path
HTTPStreaming Problem without KeyFrame
1. I modified the last parameter of IWMWriterAdvanced::WriteStreamSample() (the flags parameter that indicates a key frame via WM_SF_CLEANPOINT) to 0.
With that I didn't get any display in Windows Media Player; it just kept showing status like
Buffer 98% complete...
and after a certain interval
Buffer 96% complete...
but it never displayed anything.
2. How to change the frame rate using a transform filter? Is it possible?
Yes, it is possible. Within FillBuffer() of a source filter we set the frame rate; refer to the PushSourceDesktop filter, where the frame rate is set dynamically. So it is also possible to change the frame rate through a transform filter: we need to set the output media sample's start and end times properly to achieve the specified frame rate.
How to change frame rate using the Transform Filter ? Is it Possible ?
Yes, it is possible. Within FillBuffer() of a source filter we set the frame rate; refer to the PushSourceDesktop filter, where the frame rate is set dynamically. So it is also possible to change the frame rate through a transform filter: we need to set the output media sample's start and end times properly to achieve the specified frame rate.
This is what I Read :
----------------------
Hi, I think you must be more precise in describing what you want to achieve. Do you actually mean : display only about half of the frames, in half of the original clip duration, resulting in fact in a 100% speed up of the clip? Or something else?
As you are using EZRGB24, there are two things you can do easily :
1) you can skip a frame by returning S_FALSE in the transform function. Add in_framecount and out_framecount as members of the Filter.
HRESULT CEZrgb24::Transform(IMediaSample *pIn, IMediaSample *pOut)
{
    in_framecount++;                      // we count all incoming samples
    if (SomeFunction(in_framecount)) { return S_FALSE; } // skip if SomeFunction is true
    out_framecount++;                     // here we count only actually outgoing samples
    // now we process the samples
    HRESULT hr = Copy(pIn, pOut);
    ......
}
2) you can adjust the sample times by setting the pOut sample's start and stop times. In fact, if you skip samples, then you must retime all samples afterwards to make sure they are correctly timestamped for display by the renderer. You should do this in the same transform function after calling the copy function. Use the out_framecount to compute the start and stop times, and set them on the pOut sample using pOut->SetTime();
Typically, you should have :
tStart = out_counter * frameduration; tStop = tStart + frameduration;
You can speed up the display by reducing frameduration.
HTTPStream Rendering problem
---------------------------------
I have developed a transform filter for writing WMV data to an HTTP stream, and it writes the data properly. But if I render the video through Windows Media Player by specifying the URL, it doesn't render properly. The player shows status like
Buffer 99% complete
Playing 302Kb/s
but it doesn't render anything on the screen.
Solution :
-------------
Previously I set a key frame only every 60 frames. I used the WMVNetWrite sample application for testing: it sends WMV data to the HTTP stream, and for any WMV file its OnStreamSample() callback is called for every frame. Within that callback I checked the last parameter (the key-frame flag); when it was true, I printed "Key Frame Found" and the frame number as a debug string.
I noticed one thing: the key frames appear at somewhat irregular intervals.
So I set a key frame every 4 frames, and now it works. Without WM_SF_CLEANPOINT (which indicates a key frame), Windows Media Player will not render the data properly.
How can we identify the key frames within a filter ?...
Wednesday, December 12, 2007
Remote Desktop Capture using DSNetwork Sender and Receiver
1. I modified the PushSourceDesktop output pin's media type to MEDIATYPE_Stream with MEDIASUBTYPE_NULL (CLSID_NULL), and so on.
2. In the DSNetwork Sender and Receiver sample applications in the DXSDK, setsockopt() is used to make the socket connection multicast, and the network interface card address is passed as an argument to it. I commented out this code, and modified the DSNetwork sender and receiver filters' property pages to simply store whatever IP address is typed in.
3. I developed a raw transform filter that accepts MEDIATYPE_Stream as input and outputs MEDIATYPE_Video, so it can be connected to a renderer. I tested this filter with the following graph:
PushSourceDesktop Filter -> Raw Transform Filter -> LEAD RGB Converter -> Video Renderer
Within the raw transform filter I hardcoded the video width as 350, height as 288, bit count as 16, and the format as RGB555.
4. Next I integrated the raw transform filter with the DSNetwork sender and receiver.
5. On one PC I captured the desktop using the graph:
PushSourceDesktop (MEDIATYPE_Stream) -> MPEG Multicast Sender
6. On another PC I constructed the graph:
MPEG Multicast Receiver -> Raw Transform Filter -> Video Renderer
(The MPEG multicast receiver output pin's media type is MEDIATYPE_Stream.)
Now the captured desktop is displayed in the remote system's video renderer.
Drawbacks :
-----------
1. It consumes a lot of network bandwidth. We tested it within a LAN, and it needs a fast link; with this method we got a frame rate of 127.
Next we have to develop the network sender and receiver filters using the WMV format.
Monday, December 10, 2007
UDP Server and Client application
UDPServer :
--------------
1. WSAStartup()
2. socket()
3. bind()
4. recvfrom()
5. sendto()
UDPClient :
------------
1. WSAStartup()
2. gethostbyname() / gethostbyaddr()
3. socket()
4. connect()
5. send()
6. recv()
UDPServer:
----------
struct sockaddr_in local, from;
WSADATA wsaData;
SOCKET listen_socket, msgsock;
WSAStartup(0x202, &wsaData)
local.sin_family = AF_INET;
local.sin_addr.s_addr = (!ip_address) ? INADDR_ANY:inet_addr(ip_address);
local.sin_port = htons(port);
listen_socket = socket(AF_INET, SOCK_DGRAM,0);
if (listen_socket == INVALID_SOCKET) { /* error */ }
if (bind(listen_socket, (struct sockaddr*)&local, sizeof(local)) == SOCKET_ERROR) { /* error */ }
msgsock = listen_socket;
int fromlen = sizeof(from);
retval = recvfrom(msgsock, Buffer, sizeof(Buffer), 0, (struct sockaddr *)&from, &fromlen);
if (retval == SOCKET_ERROR) { return; }
// send the received data back to the client...
retval = sendto(msgsock, Buffer, sizeof(Buffer), 0, (struct sockaddr *)&from, fromlen);
UDP client :
---------------
struct sockaddr_in server;
struct hostent *hp;
WSADATA wsaData;
SOCKET conn_socket;
if (isalpha(server_name[0]))
{ // server address is a name
hp = gethostbyname(server_name);
}
else
{ // Convert nnn.nnn address to a usable one
addr = inet_addr(server_name);
hp = gethostbyaddr((char *)&addr, 4, AF_INET);
}
memset(&server, 0, sizeof(server));
memcpy(&(server.sin_addr), hp->h_addr, hp->h_length);
server.sin_family = hp->h_addrtype;
server.sin_port = htons(port);
conn_socket = socket(AF_INET, SOCK_DGRAM, 0);
if (connect(conn_socket, (struct sockaddr*)&server, sizeof(server)) == SOCKET_ERROR) { /* error */ }
retval = send(conn_socket, Buffer, sizeof(Buffer), 0);
retval = recv(conn_socket, Buffer, sizeof(Buffer), 0);
closesocket(conn_socket);
WSACleanup();
TCP Server and Client application
TCPServer :
-----------
struct sockaddr_in local, from;
WSADATA wsaData;
SOCKET listen_socket, msgsock;
if ((retval = WSAStartup(0x202, &wsaData)) != 0) { /* error */ }
local.sin_family = AF_INET;
local.sin_addr.s_addr = (!ip_address) ? INADDR_ANY:inet_addr(ip_address);
local.sin_port = htons(port);
listen_socket = socket(AF_INET, SOCK_STREAM,0);
if (bind(listen_socket, (struct sockaddr*)&local, sizeof(local)) == SOCKET_ERROR) { /* error, return */ }
// listen() is what makes this a TCP server; we can't use a UDP socket here
if (listen(listen_socket, 5) == SOCKET_ERROR) { /* error, return */ }
int fromlen = sizeof(from);
msgsock = accept(listen_socket, (struct sockaddr*)&from, &fromlen);
retval = recv(msgsock, Buffer, sizeof(Buffer), 0);
retval = send(msgsock, Buffer, sizeof(Buffer), 0);
TCP client :
-------------
struct sockaddr_in server;
struct hostent *hp;
WSADATA wsaData;
SOCKET conn_socket;
if ((retval = WSAStartup(0x202, &wsaData)) != 0) { /* error */ }
if (isalpha(server_name[0]))
{ // server address is a name
hp = gethostbyname(server_name);
}
else
{ // Convert nnn.nnn address to a usable one
addr = inet_addr(server_name);
hp = gethostbyaddr((char *)&addr, 4, AF_INET);
}
if (hp == NULL) { /* error, return */ }
memset(&server, 0, sizeof(server));
memcpy(&(server.sin_addr), hp->h_addr, hp->h_length);
server.sin_family = hp->h_addrtype;
server.sin_port = htons(port);
conn_socket = socket(AF_INET, SOCK_STREAM, 0); /* Open a socket */
if (connect(conn_socket, (struct sockaddr*)&server, sizeof(server)) == SOCKET_ERROR) { /* error, return */ }
retval = send(conn_socket, Buffer, sizeof(Buffer), 0);
retval = recv(conn_socket, Buffer, sizeof(Buffer), 0);
closesocket(conn_socket);
WSACleanup();
YUY2 to RGB24 conversion
#define FIXNUM 16
#define FIX(a, b) ((int)((a)*(1<<(b))))
#define UNFIX(a, b) ((a+(1<<(b-1)))>>(b))
// Approximate 255 by 256
#define ICCIRUV(x) (((x)<<8)/224)
#define ICCIRY(x) ((((x)-16)<<8)/219)
// Clip out-range values
#define CLIP(t) (((t)>255)?255:(((t)<0)?0:(t)))
#define GET_R_FROM_YUV(y, u, v) UNFIX((FIX(1.0, FIXNUM)*(y) + FIX(1.402, FIXNUM)*(v)), FIXNUM)
#define GET_G_FROM_YUV(y, u, v) UNFIX((FIX(1.0, FIXNUM)*(y) + FIX(-0.344, FIXNUM)*(u) + FIX(-0.714, FIXNUM)*(v)), FIXNUM)
#define GET_B_FROM_YUV(y, u, v) UNFIX((FIX(1.0, FIXNUM)*(y) + FIX(1.772, FIXNUM)*(u)), FIXNUM)
#define GET_Y_FROM_RGB(r, g, b) UNFIX((FIX(0.299, FIXNUM)*(r) + FIX(0.587, FIXNUM)*(g) + FIX(0.114, FIXNUM)*(b)), FIXNUM)
#define GET_U_FROM_RGB(r, g, b) UNFIX((FIX(-0.169, FIXNUM)*(r) + FIX(-0.331, FIXNUM)*(g) + FIX(0.500, FIXNUM)*(b)), FIXNUM)
#define GET_V_FROM_RGB(r, g, b) UNFIX((FIX(0.500, FIXNUM)*(r) + FIX(-0.419, FIXNUM)*(g) + FIX(-0.081, FIXNUM)*(b)), FIXNUM)
bool CYUVToRGB::YUV422P_to_RGB24V2(int width, int height, unsigned char *s,unsigned char *d)
{
int i;
unsigned char *p_dest;
unsigned char y1, u, y2, v;
int Y1, Y2, U, V;
unsigned char r, g, b;
p_dest = d;
int size = height * (width / 2);
unsigned long srcIndex = 0;
unsigned long dstIndex = 0;
try
{
for(i = 0 ; i < size ; i++)
{
y1 = s[srcIndex];
u = s[srcIndex+ 1];
y2 = s[srcIndex+ 2];
v = s[srcIndex+ 3];
Y1 = ICCIRY(y1);
U = ICCIRUV(u - 128);
Y2 = ICCIRY(y2);
V = ICCIRUV(v - 128);
r = CLIP(GET_R_FROM_YUV(Y1, U, V));
g = CLIP(GET_G_FROM_YUV(Y1, U, V));
b = CLIP(GET_B_FROM_YUV(Y1, U, V));
p_dest[dstIndex] = b;
p_dest[dstIndex + 1] = g;
p_dest[dstIndex + 2] = r;
dstIndex += 3;
r = CLIP(GET_R_FROM_YUV(Y2, U, V));
g = CLIP(GET_G_FROM_YUV(Y2, U, V));
b = CLIP(GET_B_FROM_YUV(Y2, U, V));
p_dest[dstIndex] = b;
p_dest[dstIndex + 1] = g;
p_dest[dstIndex + 2] = r;
dstIndex += 3;
srcIndex += 4;
}
return true;
}
catch(...)
{
OutputDebugString("\n YUV422P to RGB24V2 Failed");
return false;
}
}
Wednesday, December 05, 2007
How to use GDI+ ?
For GDI+, I made sure Gdiplus.dll is present in the C:\windows\system32 directory, and included the following lines in "stdafx.h":
#include <gdiplus.h>
using namespace Gdiplus;
If VC_EXTRALEAN or WIN32_LEAN_AND_MEAN is defined, the compiler generates error messages. Remove these macros to use GDI+ in VC++.
How to set output pin's media type as YUY2 ?...
We have to override the GetMediaType() function:
HRESULT CRGBToYUV::GetMediaType(int iPosition, CMediaType *pMediaType)
{
// Is the input pin connected
if (m_pInput->IsConnected() == FALSE)
{
return E_UNEXPECTED;
}
// This should never happen
if (iPosition < 0)
{
return E_INVALIDARG;
}
// Do we have more items to offer
if (iPosition > 0) {
return VFW_S_NO_MORE_ITEMS;
}
CMediaType mt = m_pInput->CurrentMediaType();
VIDEOINFOHEADER* vih = (VIDEOINFOHEADER*) mt.pbFormat;
VIDEOINFO *pvi = (VIDEOINFO*)pMediaType->AllocFormatBuffer(sizeof(VIDEOINFO));
if (pvi == 0)
return(E_OUTOFMEMORY);
ZeroMemory(pvi, pMediaType->cbFormat);
//pvi->AvgTimePerFrame = m_rtFrameLength;
pvi->bmiHeader.biWidth = vih->bmiHeader.biWidth;
pvi->bmiHeader.biHeight = vih->bmiHeader.biHeight;
m_iImageWidth = vih->bmiHeader.biWidth;
m_iImageHeight = vih->bmiHeader.biHeight;
pvi->bmiHeader.biCompression = MAKEFOURCC('Y', 'U', 'Y','2');//MAKEFOURCC('U', 'Y', 'V', 'Y');
pvi->bmiHeader.biBitCount = 16;
pvi->bmiHeader.biSizeImage = GetBitmapSize(&pvi->bmiHeader);
// Clear source and target rectangles
SetRectEmpty(&(pvi->rcSource)); // we want the whole image area rendered
SetRectEmpty(&(pvi->rcTarget)); // no particular destination rectangle
pMediaType->SetType(&MEDIATYPE_Video);
pMediaType->SetFormatType(&FORMAT_VideoInfo);
pMediaType->SetTemporalCompression(FALSE);
// Work out the GUID for the subtype from the header info.
const GUID SubTypeGUID = GetBitmapSubtype(&pvi->bmiHeader);
pMediaType->SetSubtype(&SubTypeGUID);
pMediaType->SetSampleSize(pvi->bmiHeader.biSizeImage);
m_pRGBBuffer = new BYTE[pvi->bmiHeader.biWidth *pvi->bmiHeader.biHeight *3];
m_pYUVBuffer = new BYTE[pvi->bmiHeader.biWidth *pvi->bmiHeader.biHeight * 2];
return NOERROR;
}
Flipped Video problem in DirectShow
Normally in DirectShow we face the flipped-image problem. If we have an image buffer, how can we flip it in code?
Solution :
------------
The buffer is laid out as:
ScanLine 1 - pixels 0 to width-1
ScanLine 2 - pixels 0 to width-1
up to
ScanLine N
To flip the image data, we have to reorder the scan lines as:
ScanLine N and its pixels
ScanLine N-1 and its pixels
up to
ScanLine 1
For doing this, we developed the following code:
void FlipUpDown(BYTE* pData, int gWidth, int gHeight, int gChannels)
{
    BYTE* scan0 = pData;                                                           // first scan line
    BYTE* scan1 = pData + ((gWidth * gHeight * gChannels) - (gWidth * gChannels)); // last scan line
    for (int y = 0; y < gHeight / 2; y++)
    {
        for (int x = 0; x < gWidth * gChannels; x++)
        {
            BYTE temp = scan0[x];   // swap one byte between the two lines
            scan0[x] = scan1[x];
            scan1[x] = temp;
        }
        scan0 += gWidth * gChannels;  // move down one line
        scan1 -= gWidth * gChannels;  // move up one line
    }
}
Custom build for Registering a Filter at Compile time
For registering the filter with regsvr32 during compilation, I did the following:
1. Open Project -> Properties -> Custom Build tab and set:
Description : "Registering DirectShow Filter..."
Commands : regsvr32 /c "$(TargetPath)"
echo regsvr32 exec. time > "$(OutDir)\regsvr32.trg"
Outputs : $(OutDir)\regsvr32.trg