Tuesday 2 December 2014

Problem with my Blogger

Somehow, some of the posts that I wrote a while ago cannot be displayed properly, like the ones below, although they look perfectly fine when I revert them to drafts.

[screenshots of the broken posts]
So I just deleted the original posts and wrote new ones, which is why the dates on some posts are not correct.

Last week of class

Today is November 30th, and it's the last week of class; after that we have exams coming up. Looking back to the beginning of this semester, I think I really learnt a lot about computer science, and the material from last week especially will stick in my mind for a really long time.

This week, I am mainly focusing on the last assignment for this course, which is about proving and disproving Big O and Big Omega statements, as well as the halting problem. The assignments were always quite hard compared to the tests, quizzes and the material covered in class. However, I found this last assignment the hardest, as we spent a tremendous amount of time tackling questions 3, 4, 5 and 6 (almost every question in the assignment!).

In this week's lectures, we talked about countable and uncountable sets. We also learnt a new proof technique called induction, which has two parts: a base case and an induction step. In the base case, we prove that P(0) (or P(k), for whatever the smallest value is) holds. Then in the induction step, we assume P(n) is true and prove that P(n+1) is also true. This is a really powerful and widely used proof technique in mathematics.
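To make sure I understand the idea, here is a small worked example (my own, not one from lecture). Let P(n) be the statement 1 + 2 + ... + n = n(n+1)/2.

Base case: P(0) says the empty sum equals 0(0+1)/2 = 0, which is true.
Induction step: assume P(n) holds, i.e. 1 + 2 + ... + n = n(n+1)/2. Then 1 + 2 + ... + n + (n+1) = n(n+1)/2 + (n+1) = (n+1)(n+2)/2, which is exactly what P(n+1) claims. So by induction, P(n) holds for every natural number n.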

Since we will finish this course in a few days, I would like to say a few words: this is quite a practical and useful course for computer science, and I learnt a lot during this semester. We also have great TAs, professors and classmates, and I would like to thank them for their efforts and wish them all the best in the future!

Week 11: The Halting Problem

This week, we were introduced to two totally new ideas: the halting problem and computability. These were first introduced and proved by Alonzo Church (who introduced the lambda calculus) and Alan Turing (who described the Turing machine) before computers even existed.

Then the prof showed us a function called collatz:
def collatz(n):
    # n is a natural number
    while n > 1:
        if n % 2 == 0:
            n = n // 2  # integer division, so n stays a natural number
        else:
            n = 3 * n + 1
    return "Reached 1"
Nobody on this planet knows whether this function halts for every natural number or loops forever on some input. Even mathematicians cannot answer it -- they have only verified that the function halts for every n less than 2^58.
Then we turned to the function H(f, i), which takes a function f and an input i, and outputs whether the call f(i) halts. To show that H cannot be computed, we use contradiction: we build a function that feeds itself to H and then does the opposite of whatever H predicts. If we assume that this function halts, we end up concluding that it doesn't halt; if we assume it doesn't halt, we conclude that it halts. Either way H must be wrong, so no such H can exist.
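Here is my own sketch of how that contradiction goes in code (the names halt and confused are mine, not the ones from lecture, and the stub obviously isn't a real implementation -- the whole point is that none can exist):

def halt(f, i):
    # Hypothetical decider: returns True iff f(i) halts.
    # The proof shows no real implementation can exist; this stub
    # just stands in for the claim that one does.
    raise NotImplementedError

def confused(f):
    # Do the opposite of whatever halt predicts about f(f).
    if halt(f, f):
        while True:   # halt says f(f) halts, so loop forever
            pass
    return "done"     # halt says f(f) loops, so halt immediately

# Does confused(confused) halt? If it halts, then halt(confused, confused)
# returned True, so it loops forever: contradiction. If it loops, then
# halt returned False, so it returns "done": contradiction. Either way
# halt gave the wrong answer, so H(f, i) is not computable.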

Finally, we learnt about computability. A function f can be well defined, i.e. we can say what f(x) is for every x in the domain, and yet there can be no way to compute f(x); such an f is called noncomputable. And computability clearly has a lot to do with halting, since the example noncomputable functions the prof showed in class were all built around calls to H(f, i).

In conclusion, this week's topic is the most staggering and impressive part of the course so far, because it showed me something I've never been exposed to before, and it is also different from the usual material taught in class, like algorithms. Although I still don't fully grasp everything the prof said about this topic, I really find it interesting and I will keep working on it!

Week 10: Proofs of Big Omega

This week, we gradually shifted our focus from Big O to Big Omega. The proofs are quite similar to the Big O ones. The main idea (and the hard part) of a proof is to find a magic breakpoint B and simplify the function without changing its highest-degree term, since the lower-degree terms are negligible compared to the highest-degree term and can be bounded away.
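For example (my own numbers, to check that I get it): to prove 3n^2 + 2n is in Omega(n^2), pick c = 3 and breakpoint B = 0. Then for all n >= 0, we have 3n^2 + 2n >= 3n^2 = c * n^2, so the definition is satisfied -- the 2n term only helps a lower bound, so we can simply drop it.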

Also, the prof taught us a way of checking whether two functions satisfy a Big O or Big Omega relationship: calculate the limit of the ratio f(n)/g(n) as n goes to infinity. The limit tells us the answer because it describes the behaviour of the two functions near infinity, from which we can judge whether f <= cg or f >= cg eventually holds.
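For instance (as I understand the rule): the limit of (2n + 1)/n^2 as n goes to infinity is 0, which tells us 2n + 1 is in O(n^2) but not in Omega(n^2). If the limit had come out as a positive constant, each function would bound the other both ways, and if it had gone to infinity we would get Big Omega but not Big O.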

Week 9: Proofs of Big O

This week, we had our second midterm and also our second assignment due. What a busy week! Guess I will take a break from studying this weekend... Anyways, I think I did pretty well on the test as well as on the assignment. I personally think the assignment was harder, and it took me and my partner quite a long time to finish it. Question 2 was alright, but Question 1 really got us, especially Claims 1.3 and 1.4, because we had never done that kind of proof before: for example, we knew Claim 1.4 was wrong, but after negating the statement we didn't know how to prove x = y. Thanks to Google, and thanks to the professor, we now know that to prove two numbers are equal we can prove both x >= y and x <= y.

During this week's lectures, we continued talking about Big O and Big Omega (but mainly Big O), and we started learning the actual proofs of Big O statements. As I've mentioned before, most proofs have to start from the definitions. So, for Big O, the key point is to find a c and a B that satisfy the definition. A Big O proof is sort of like a limit proof, in that you have to work backwards to find the right numbers in both cases.
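Here is a small example with my own numbers (not from the assignment): to prove 5n^2 + 3n is in O(n^2), work backwards. For n >= 1 we have 3n <= 3n^2, so 5n^2 + 3n <= 5n^2 + 3n^2 = 8n^2. Hence c = 8 and B = 1 satisfy the definition.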

We've also learnt how to disprove a Big O statement, which is just proving its negation. However, the problem for me is not how to prove/disprove it, but figuring out whether the function satisfies the Big O in the first place. Therefore, I usually assume the function is in Big O and try to find a c and B that work, and if I can't, I then try to disprove it. I think my technique is time-consuming, especially on a test, but I cannot find a better way to do it. If any of you know one, please tell me!
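To make the negation concrete, here is my own example: to disprove n^2 in O(n), we must show that for every c and B there is some n >= B with n^2 > c * n. Given any c and B, pick a natural number n bigger than both B and c; then n^2 = n * n > c * n, which is exactly what the negation demands.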

Week 8: Intro to Big O and Big Omega

In this week's lectures, we were introduced to the notations Big O and Big Omega, which are two ways of assessing how efficient an algorithm is. As I mentioned in last week's slog, it is extremely important for a programmer/software developer to know the running time and efficiency of their algorithms.

Big O specifically describes the worst-case scenario and can be used to describe the execution time required by an algorithm. O(N), for example, describes an algorithm whose running time grows linearly, in direct proportion to the size of the input data. O(N^2), on the other hand, represents an algorithm whose running time is proportional to the square of the size of the input. The more formal definition the professor gave in class for O(n^2) is that beyond a breakpoint B, f(n) is upper-bounded by cn^2, where c is a constant multiplier. Big Omega, which is the mirror image of Big O, is used to describe the best-case running time for a given algorithm.

Sometimes, people also describe Big O and Big Omega as bounds: Big O represents an upper bound on the running time of an algorithm, which corresponds to the “worst case”, whereas Big Omega represents a lower bound, the “best case” for that algorithm.
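Writing the two definitions side by side helped me keep them straight (my paraphrase of the definitions from class):

f is in O(g) iff there exist c > 0 and a breakpoint B such that for all n >= B, f(n) <= c * g(n).
f is in Omega(g) iff there exist c > 0 and a breakpoint B such that for all n >= B, f(n) >= c * g(n).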

Week 7: Running time

This week, we wrapped up proof techniques and started talking about the efficiency of different algorithms. To conclude the proof part, we learnt some inference rules such as conjunction/disjunction elimination, existential instantiation, etc. These rules let us do proofs more efficiently by cutting out redundant steps. Then we moved on to calculating the running time of different functions, which refers to the number of steps a function takes.

The running time of an algorithm is an essential topic in computer science, because every programmer or software developer has to produce algorithms that are as efficient as possible. Programmers deal with huge amounts of data, so we don't want an algorithm that takes forever to output its result. To make sure that we -- as beginning programmers -- won't run into that trouble, we first have to learn how to count the running time of a function.

In this week's lectures, we compared bubble sort and merge sort. We fed specific inputs into each function to determine how many steps it takes, and then concluded which one is faster and more efficient. Then we compared the growth factor of each function: if we double the size of the input, how does the running time of each function change? Does the ratio of the two running times stay the same? We found that some functions do better (run faster) on large inputs, while others run faster on small inputs but get slower and slower as the data gets larger and larger.
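To see the doubling experiment for myself, here is a little sketch I might run (my own code, not the prof's; bubble_sort_steps counts comparisons as a rough proxy for running time):

import random

def bubble_sort_steps(lst):
    # Bubble sort that counts comparisons instead of returning the sorted list.
    lst = list(lst)
    steps = 0
    for i in range(len(lst)):
        for j in range(len(lst) - 1 - i):
            steps += 1
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]
    return steps

for n in [1000, 2000, 4000]:
    data = [random.randint(0, 10**6) for _ in range(n)]
    print(n, bubble_sort_steps(data))
# Each time n doubles, the comparison count roughly quadruples --
# the quadratic growth we saw for bubble sort in class. A faster sort
# like merge sort should only a bit more than double instead.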