Probability for engineers and scientists

<p>Just brushing up on probability and statistics. I grabbed a nice <a href="http://www.amazon.com/Probability-Statistics-Engineering-Sciences-Devore/dp/0534372813">book</a> from the library and am studying some basic concepts such as random variables and joint probability distributions.</p>
<p>A few things I need to understand are maximum likelihood estimation (MLE) and conditional probability leading to Bayes' theorem, which commonly comes up in modelling problems in imaging.</p>
<p>Today I have been reading about MLE, which is one of the most popular methods of point estimation. Point estimation, as I understand it, means approximating a population parameter from a sample, for example estimating the population mean by the sample mean. MLE was explained nicely <a href="http://statgen.iop.kcl.ac.uk/bgim/mle/sslike_3.html">here</a> using a coin experiment. Given a sample (say, 52 heads and 48 tails in 100 tosses), what probability of heads P(H) makes this sample most likely? The probability of 52 heads and 48 tails when P(H) = p is the binomial probability 100!/(52! * 48!) * p^52 * (1-p)^48. Computing this with P(H) = 0.5 gives a lower value than with, say, P(H) = 0.52, so maximizing the likelihood function determines the optimal value of P(H). Likelihood functions are usually expressed as a product of probabilities, so it is more convenient to maximize the log-likelihood, which transforms the product into a summation. This is simply because multiplying many small numbers produces ever smaller numbers that are difficult to represent, whereas their logarithms can be summed without underflow. (A small numerical sketch follows at the end of this post.)</p>
<p>The <a href="http://en.wikipedia.org/wiki/Expectation-maximization_algorithm">EM algorithm</a> is used for computing MLEs when part of the data is unobserved. I have yet to read up on EM and perhaps see how it is applied in Lorenzo-Valdes' thesis. Some time ago I borrowed a book on <a href="http://www.amazon.co.uk/Pattern-Classification-Second-Wiley-Interscience-publication/dp/0471056693/ref=sr_1_1?ie=UTF8&s=books&qid=1254832592&sr=8-1">Pattern Classification</a> which has a chapter on EM.</p>
<p>My main focus today, however, is not statistics but the graph cuts method for segmentation, which is apparently commonly used in medical image segmentation. I have yet to come to grips with how the method works.</p>
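<p>Going back to the coin experiment: below is a minimal Python sketch (my own, not taken from the linked tutorial; the function names are just for illustration) that evaluates the binomial likelihood at P(H) = 0.5 and 0.52, and then finds the maximizer of the log-likelihood with a crude grid search.</p>
<pre><code>import math

def likelihood(p, heads, tosses):
    # Binomial probability of exactly `heads` heads in `tosses`
    # independent flips of a coin with P(H) = p.
    return math.comb(tosses, heads) * p ** heads * (1 - p) ** (tosses - heads)

def log_likelihood(p, heads, tosses):
    # Log of the same quantity: the product of per-toss probabilities
    # becomes a sum, which avoids floating-point underflow when the
    # sample is large. lgamma(n + 1) = log(n!).
    return (math.lgamma(tosses + 1) - math.lgamma(heads + 1)
            - math.lgamma(tosses - heads + 1)
            + heads * math.log(p)
            + (tosses - heads) * math.log(1 - p))

# 52 heads in 100 tosses: the sample is more likely under p = 0.52.
print(likelihood(0.50, 52, 100))   # ~0.0735
print(likelihood(0.52, 52, 100))   # ~0.0796

# Crude grid search over p in (0, 1) for the maximizer.
best = max((k / 1000 for k in range(1, 1000)),
           key=lambda p: log_likelihood(p, 52, 100))
print(best)                        # 0.52
</code></pre>
<p>The grid search lands on the sample proportion 52/100 = 0.52, which is exactly the closed-form MLE for a binomial proportion.</p>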