One easy solution is to use the `write` command from scilab. Define the following function:

```
function pstrickswrite2d(filename,data_matrix)
//Author: Panagiotis Chatzichrisafis
//Date: 25.01.2015
//Description:
//The function pstrickswrite2d(filename,data_matrix) has two inputs:
//
//filename:    The path and the filename which shall be used to write
//             formatted output for PSTricks.
//data_matrix: The matrix containing the data to be written into the file
//             provided by the filename argument.
//
//The output written to the file has a format which is suitable for
//the TeX PSTricks package fileplot or dataplot commands.
_num_decimal_places_=5;
file_desc=file('open',filename,'unknown','formatted');
_max_num_ = max(abs(data_matrix));
_num_integer_digits_ = ceil(log10(_max_num_));
_num_format_(1) ='F';
// add number of integer digits and decimal digits plus 2 for sign information
_num_format_(2) = string(_num_integer_digits_+_num_decimal_places_+2);
_num_format_(3) ='.';
_num_format_(4) = string(_num_decimal_places_);
_num_format_ = strcat(_num_format_);
_format_(1) = '( ''{'' ';
_format_(2) = _num_format_;
_format_(3) = ', '','' ';
_format_(4) = _num_format_;
_format_(5) = ' ''}'' )';
_format_ = strcat(_format_);
write(file_desc,data_matrix,_format_);
file('close',file_desc);
endfunction
```

and load it into the scilab workspace.
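As a quick sanity check (the file name and the sine data below are just an example), the function can then be called like this:

```
// Example (hypothetical data/file name): write 50 samples of a sine wave
// in the "{x,y}" format produced by pstrickswrite2d.
x = linspace(0, 2*%pi, 50)';
pstrickswrite2d('sine_for_pstricks.dat', [x, sin(x)]);
```

The resulting file can then be plotted in LaTeX with the PSTricks fileplot or dataplot commands.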
(1) |

where

(2) |

for the process of Problem 4.1. We are informed that this estimator may be viewed as an averaged periodogram. In this point of view the data record is sectioned into blocks (in this case, of length 1) and the periodograms of the blocks are averaged. We are asked to find the mean and variance of the estimator and compare the result to that obtained in [2].
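To make the "blocks of length one" interpretation concrete (a sketch in my own notation, since the estimator's formula has not survived here): the periodogram of a single sample $x(n)$ is just $|x(n)|^2$, so averaging the $N$ one-sample periodograms presumably gives

```latex
\hat{P}(f) \;=\; \frac{1}{N}\sum_{n=0}^{N-1} \lvert x(n)\rvert^{2},
\qquad
E\{\hat{P}(f)\} \;=\; E\{\lvert x(n)\rvert^{2}\} \;=\; r_x(0),
```

a mean that is independent of the frequency $f$.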

Thus taking the average periodogram of one-sample sections has no statistical advantage for the result of the estimator.

(1) |

If is a real white Gaussian noise process with PSD

(2) |

we are asked to find the mean and variance of . We are also asked whether the variance converges to zero as . The hint provided within the exercise is to note that

(3) |

where the quantity inside the parentheses is

We see that while the mean converges to the true power spectral density, the variance does not converge to zero. Thus the estimator is inconsistent. QED.
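For reference, the classical result being derived here (stated under the standard assumptions; the exact finite-$N$ terms should be checked against the text) is that for real white Gaussian noise with variance $\sigma^2$:

```latex
E\{\hat{P}_{per}(f)\} \;=\; \sigma^2,
\qquad
\operatorname{Var}\{\hat{P}_{per}(f)\} \;\xrightarrow[N\to\infty]{}\; \sigma^4 \;\neq\; 0 .
```

The variance is of the same order as the square of the quantity being estimated, no matter how much data is used, which is exactly the inconsistency noted above.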

In my case I wanted to create Xcode projects out of the source directory. For this purpose I created a new path next to the directory where I downloaded and extracted the OpenCV software (let's call this place `mypath_to_opencv`, where `mypath_to_opencv/opencv-2.4.7` is the path to the opencv-2.4.7 sources). The following generators are available on this platform:

- Unix Makefiles = Generates standard UNIX makefiles.
- Xcode = Generate Xcode project files.
- CodeBlocks - Unix Makefiles = Generates CodeBlocks project files.
- Eclipse CDT4 - Unix Makefiles = Generates Eclipse CDT 4.0 project files.
- KDevelop3 = Generates KDevelop 3 project files.
- KDevelop3 - Unix Makefiles = Generates KDevelop 3 project files.

Start your terminal and type:

`mkdir xcode`
`cd xcode`
`cmake -G Xcode mypath_to_opencv/opencv-2.4.7`

With the last command cmake is told to generate Xcode projects out of the files at the location where opencv-2.4.7 was extracted. The Xcode projects will be generated into the path where we called cmake; in the case mentioned above this path is `mypath_to_opencv/xcode`. If you only want to use the libraries for the OpenCV API, you may like to read Tilo Mitra's article about the procedure, while I still suggest that the easiest approach is using MacPorts.

Now let's go back to our Xcode generated build. If you look through your build tree you'll find several Xcode projects. The top level of your build tree (xcode) includes for example `mypath_to_opencv/xcode/OpenCV.xcodeproj`. The samples directory, in case the sample build has been enabled in ccmake, contains the Xcode project file `mypath_to_opencv/xcode/samples/c/c_samples.xcodeproj`.

In my case the standard cmake configuration had activated the OCL dependency, which caused problems during the build on my system because I had not installed any OCL driver. I had to reconfigure the build by running (yes, the command below is ccmake)

`ccmake -G Xcode mypath_to_opencv/opencv-2.4.7`

and switching off the cmake variable WITH_OPENCL. So now I was ready for experimentation. The error message I got when building and running OpenCV Xcode examples that were accessing a camera was "QTKit didn't find any attached Video Input Devices!". The same error was reported in a post by karlphillip at Stack Overflow. Unfortunately I couldn't fix my problem with the solution provided there. Debugging didn't provide any clues why this was happening, and somehow I came across a post by Andrew Janke on Stack Overflow which gave me the right momentum. The macam version that I am using (macam-cvs-build-2009-09-25) could be built without any changes for a 32 bit system, while the 64 bit build failed. OpenCV (2.4.7), on the other hand, is generated with 32/64 bit architecture support when running the cmake commands mentioned above. It is possible to reconfigure the Xcode project by changing the architecture option in ccmake and regenerating the xcode build directory. Trying to build OpenCV with 32 bit support failed on my system due to incompatible libraries that were installed on my machine. The following libraries were only available for the 64 bit architecture:

- libdc1394.dylib
- libavcodec.dylib (part of ffmpeg)
- libavformat.dylib (part of ffmpeg)
- libavutil.dylib (part of ffmpeg)
- libswscale.dylib (part of ffmpeg)
- libbz2.dylib

the ensemble ACF and the temporal ACF as , where the ‘s are all uniformly distributed random variables on and independent of each other. We are also asked to determine if this random process is autocorrelation ergodic.

As in exercise [2] we can use the trigonometric relation [3, p. 810] :

(1) |
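The relation from [3, p. 810] is presumably the product-to-sum identity (my reconstruction, since it is the formula that turns the products of cosines in the ACF into sums):

```latex
\cos a \,\cos b \;=\; \tfrac{1}{2}\bigl[\cos(a-b) + \cos(a+b)\bigr] .
```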

to further simplify the expression for the ensemble autocorrelation function:

(2) |

Having obtained the ensemble autocorrelation function we can proceed to obtain the temporal autocorrelation function, which we hope will be a good approximation to the ensemble autocorrelation function . By definition the temporal autocorrelation function is given by

(3) |
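Assuming the usual time-average definition for a discrete-time process, (3) presumably reads:

```latex
\hat{r}_x(k) \;=\; \lim_{N\to\infty}\frac{1}{2N+1}\sum_{n=-N}^{N} x(n+k)\,x(n) .
```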

The second sum of the previous relation can be simplified by one of the formulas derived in [2], for :

Thus, as long as , the temporal autocorrelation function may be expressed as:

(4) |

In order to simplify the relation further, especially the first sum, we will again use (1), with and , and thus :

(5) |

We see that both distinct parts are of the form , and thus it remains to simplify this relation. Again using a trigonometric formula [3, p. 810]:

(6) |

with and we can simplify the relation as:

Furthermore it was shown in [2] that , for , so we can further simplify the previous relation by:

(7) |

Thus finally we can simplify (5) by:

(8) |

So finally the temporal autocorrelation (4) can be reduced using (8) to the following formula:

(9) |

Rearranging the terms on the right we see that the temporal autocorrelation function equals the ensemble autocorrelation function with an additional error , which we have derived for the case when :

At once we see again that when , the error goes to zero. Thus the random process of the sum of sinusoids is also autocorrelation ergodic as long as . It is also easy to recognize, by using the argumentation of the previous exercise [2], that this is not true if does not hold. In this case parts of the error of equation (3) are proportional to , as was already observed in [2]. QED.

as . As a second step we are asked to determine if the random process is autocorrelation ergodic.

The previous relation can be furthermore simplified by the property of trigonometric functions [3, p. 810] :

(1) |

and thus the autocorrelation can be written as:

Again it is possible to further simplify the previous relation, this time by using the following trigonometric formula [3, p. 810]:

(2) |

Because the sine function is odd, the symmetric sum equals zero. Generally the same is not true for the cosine sum. For , for example, the sum equals , and thus even for the temporal autocorrelation function will be in error relative to the true autocorrelation function. In general it is true that . The error relative to the true autocorrelation function equals , and thus the process is in general not autocorrelation ergodic, because, as we have shown for e.g. , the temporal autocorrelation doesn't even converge for large . That is:

While it is true that there are frequencies for which the temporal autocorrelation doesn't converge, there are also cases in which the process is autocorrelation ergodic. To see this we have to further simplify the relation for the temporal autocorrelation. The sum can be further simplified by using and observing that the resulting sum is a geometric progression ([3, p. 7]) with :
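For reference, the finite geometric series formula from [3, p. 7] is the one that closes the sum once the cosine is written as the real part of a complex exponential:

```latex
\sum_{n=0}^{N-1} r^{\,n} \;=\; \frac{1-r^{N}}{1-r}, \qquad r \neq 1 .
```

With $r = e^{j\omega}$ for some nonzero frequency $\omega$ (my notation), the numerator stays bounded, which is what makes the averaged sum vanish as the record length grows.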

Using this identity the error to the true autocorrelation function can be rewritten as:

Thus, by only imposing this condition, the process is autocorrelation ergodic.

```
/home/User/.Scilab/scilab-version
```

If the file isn't there, just create it and call your scripts with the definitions to be loaded into the scilab workspace.

For example you might want to load definitions of physical constants into the workspace that are defined within ‘physical_constants.sce’. For this purpose add a line like the following into the .scilab file:

`exec('/Users/User/Development/scilab/Base/physical_constants.sce');`

is given by [1, eq. (3.60), p. 58]. For the case when is real white noise, we are asked what the variance expression reduces to. The hint given is to use the relationship from [1, eq. (3.64), p. 59].

while [1, eq. (3.64), p. 59] is given by:

First we note that the mean of the sample mean converges to the true mean of the WSS random process :
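Assuming the sample mean is defined in the usual way over $N$ samples (my notation; the text's symbols have not survived here), this first step is:

```latex
\hat{m}_x \;=\; \frac{1}{N}\sum_{n=0}^{N-1} x(n),
\qquad
E\{\hat{m}_x\} \;=\; \frac{1}{N}\sum_{n=0}^{N-1} E\{x(n)\} \;=\; m_x ,
```

i.e. the sample mean is an unbiased estimator of the true mean $m_x$.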

From this we can derive the variance of the sample mean as:

The squared sample mean can be written as:
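Writing the sample mean as $\hat{m}_x = \frac{1}{N}\sum_{n=0}^{N-1} x(n)$ (my notation), its square for a real process expands into a double sum:

```latex
\hat{m}_x^{\,2} \;=\; \frac{1}{N^2}\sum_{n=0}^{N-1}\sum_{m=0}^{N-1} x(n)\,x(m) .
```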

From the previous relation we can derive that the mean squared sample mean is given by:

(1) |

(2) |

In the previous relations the factors and denote that for the second summand, while for the third summand degenerates to zero. The three double sums can be further simplified:

(3) |

The second double sum can be written as:

(4) |

while similar to the previous derivation the third double sum can be simplified to:

(5) |

Using (3),(4) and (5) in (2) we derive the relationship:

(6) |

Observe that we could have replaced the sum in (1) by its equivalent form and applied relation [1, eq. (3.64), p. 59] from the hint; we would then have arrived at the same result. In the approach taken in this solution we have also derived the relation provided by the hint. Now relating the autocovariance to the autocorrelation function:

and replacing the autocorrelation in (6) by we obtain the variance of the sample mean as:

(7) |

Noting that [2, p. 125]: we can simplify the second part of the previous equation by:

(8) |

So finally we can rewrite the variance of the sample mean (7) as:

(9) |
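Equation (9) presumably corresponds to the well-known expression for the variance of the sample mean in terms of the autocovariance sequence $c_x(k)$ (worth checking against [1, eq. (3.60)]):

```latex
\operatorname{Var}\{\hat{m}_x\}
\;=\;
\frac{1}{N}\sum_{k=-(N-1)}^{N-1}\Bigl(1-\frac{|k|}{N}\Bigr)\,c_x(k) .
```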

which is the relation for the variance of the sample mean used in [1, eq. (3.60), p. 58]. QED.

(1) |

, where is uniformly distributed on , is WSS by finding its mean and ACF. Using the same assumptions we are asked to repeat the exercise for a single complex sinusoid

(2) |

and is thus independent of the sampling variable. Let the p.d.f. of be ; then the ACF of the random process is given by

(3) |

From the trigonometric formula [2, p. 810 rel 21.2-12]:

(4) |

by setting and we obtain because :
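Under the stated assumption that the phase is uniformly distributed over a full period, the expectation of the product of cosines works out to (a sketch in my notation, with amplitude $A$, frequency $\omega_0$, and phase $\phi$):

```latex
r_x(k) \;=\; E\bigl\{A\cos(n\omega_0+\phi)\,A\cos\bigl((n+k)\omega_0+\phi\bigr)\bigr\}
\;=\; \frac{A^2}{2}\cos(k\omega_0),
```

since the product-to-sum term $\cos\bigl((2n+k)\omega_0+2\phi\bigr)$ averages to zero over the uniform phase.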

Thus it is shown that the random process is WSS. Repeating the same steps for a single complex sinusoid, we obtain that the mean is equal to zero and independent of :

(5) |

The autocorrelation of the complex sinusoid is obtained by:
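In the same notation, for a complex sinusoid $x(n) = A\,e^{j(n\omega_0+\phi)}$ the random phase cancels in the product with the conjugate, presumably giving (the sign convention may differ from the text's):

```latex
r_x(k) \;=\; E\{x(n+k)\,x^{*}(n)\}
\;=\; |A|^2\, e^{jk\omega_0},
```

whose real part, $|A|^2\cos(k\omega_0)$, is twice the ACF $\frac{A^2}{2}\cos(k\omega_0)$ of the real sinusoid.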

which again is independent of , and thus the complex sinusoid is also wide-sense stationary. We see that in this case the real part of the autocorrelation of the complex sinusoid equals twice the autocorrelation of the real sinusoid.