Wednesday, 3 December 2014

computability

This is the very last part of our class. It's also the most abstract part, which makes it extremely hard to understand. There are many problems in our lives, and many of them can be solved by computer: we devise an algorithm and let the computer finish the job. However, there are some problems that cannot be solved by any computer! (It really sounds crazy: what? Haven't you heard that nothing is impossible?!) But unfortunately, for some kinds of problems it truly is impossible. Danny gave an example of such a problem: the halting problem. It is not computable because no algorithm can exist that decides, for every program and every input, whether that program will eventually halt.
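In Python, the classic argument (my own sketch of it, not Danny's exact code; halt here is a purely hypothetical function that, as the argument shows, nobody can actually write) looks like this:

def halt(program, arg):
    # hypothetical stub: pretend this always answers correctly whether
    # program(arg) eventually halts; the code below shows it can't
    return True

def confused(program):
    if halt(program, program):   # if halt says confused(confused) halts...
        while True:              # ...then loop forever instead
            pass
    else:
        return                   # ...and if it says it loops, halt at once

Whatever halt(confused, confused) answers, confused(confused) does the opposite, so no correct halt function can exist.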

Even now, I still don't feel I fully understand this problem. I've also heard about a famous problem called "P = NP?". I found that many concepts from our lectures are involved in it, and it is regarded as the crown jewel of computer science. We must admit that we know very little about algorithms so far, and there is still a long way to go before anyone could even attempt to prove it. I learned so much from this course. It let me explore the amazing world of computer science, and I believe it will be a good start.

Tuesday, 2 December 2014

big-omega and big-theta

The only difference between big-oh and big-omega is that big-oh requires "f(n) <= c*g(n)" while big-omega requires "f(n) >= c*g(n)". The proof strategy is also pretty much the same. Big-theta is simply the combination of the two properties above. I think this is one of the easier sections of the whole course.
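Written side by side (in LaTeX notation, since the slog can't render the symbols), the three definitions differ only in that one inequality:

f \in O(g) \iff \exists c > 0, \exists b \in \mathbb{N}, \forall n \in \mathbb{N}, n \ge b \Rightarrow f(n) \le c \cdot g(n)

f \in \Omega(g) \iff \exists c > 0, \exists b \in \mathbb{N}, \forall n \in \mathbb{N}, n \ge b \Rightarrow f(n) \ge c \cdot g(n)

f \in \Theta(g) \iff f \in O(g) \land f \in \Omega(g)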

However, there is still a tough problem in this easy section: the notorious problem 5 on assignment 3. It asked us to prove or disprove that "for all f and g in the set of functions from N to R>=0, f is in O(g) or f is in Omega(g)." This claim looked so reasonable at first that my partner and I both thought we'd have to prove it; we spent many hours and came up with nothing. Finally we found that f(n) = n if n is odd and n^3 if n is even, together with g(n) = n^2, falsifies the claim, because f is neither in O(g) nor in Omega(g).
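To see why this counterexample works, compare the ratio f(n)/g(n) on the two kinds of input (LaTeX notation):

\frac{f(n)}{g(n)} = \frac{n^3}{n^2} = n \quad \text{(even } n\text{, unbounded)}, \qquad \frac{f(n)}{g(n)} = \frac{n}{n^2} = \frac{1}{n} \quad \text{(odd } n\text{, shrinks to } 0\text{)}

The even inputs eventually beat any constant c, so f is not in O(g); the odd inputs eventually drop below any c > 0, so f is not in Omega(g) either.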

Although the claim is not correct as stated, I think that with more preconditions it could be made right. My guess is that if f and g are both monotonic and nonzero then the claim holds (just a guess!). I think it's really a good problem, and I learned that we can never say anything for sure until we've proved it, even if it looks perfectly reasonable!

big-oh

It seems like we've been doing so much maths; csc165 needs to get back to the world of computer science! When we want to write a program to solve some specific problem on a computer, we need to develop an algorithm first. For the same problem there can be different algorithms, and their complexities can be quite different. The complexity of an algorithm is determined by the number of steps it takes to reach an output. Different inputs take different numbers of steps, so we usually consider the worst case. In the eyes of computer scientists, there is no difference between certain functions counting the number of steps, for example n^2 and 6n^2 + 2, because when n is very large these functions behave in almost the same way. They all belong to something rather strange called O(n^2), and so a new concept, big-oh, is introduced.

The definition of big-oh goes like this: f(x) in O(g(x)) is equivalent to "there exists a c > 0 and a natural number b such that for all natural numbers n, if n >= b then f(n) <= c*g(n)." (Note the <=: big-oh is an upper bound.) Because we get to pick a particular b and c first, to be honest I didn't run into any problems in this section. Everything seems to be fine, at least in this part.
All we have to do is build a chain f(n) <= ... <= ... <= c*g(n), and the proof is complete.
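For instance, here is my own little instance of that chain, showing 6n^2 + 2 is in O(n^2):

6n^2 + 2 \le 6n^2 + 2n^2 = 8n^2 \quad \text{whenever } n \ge 1

so picking c = 8 and b = 1 completes the proof.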

proofs about limits

Proofs about limits are one of the hardest types of proof we do. Briefly, lim f(x) (x -> a) = L is equivalent to "for all e > 0, there exists a d > 0 such that for all x in the real numbers, if |x - a| < d then |f(x) - L| < e". Because the statement must hold for all e > 0, we need to express d in terms of e when we pick d, and that is the most difficult part of the proof.

Sometimes it is very easy to choose d for a certain type of function (such as f(x) = x). However, sometimes we have trouble picking an appropriate d. Danny gave an example of proving lim x^2 (x -> 3) = 9, and I don't want to repeat the whole process here. In class, Danny used a very clever trick to simplify the proof. After bounding |x^2 - 3^2| above by d(d + 6), Danny made full use of the fact that we are free to pick d however we like: at this step he assumed that whatever d he picks is less than 1, which turns d(d + 6) into something less than 7d (# since d < 1). To make 7d less than e, we then just pick d = min(e/7, 1). This way, we never have to solve d(d + 6) = e for its root in order to pick d, which greatly simplifies the process.
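Written out as one chain (LaTeX notation), assuming |x - 3| < d and d <= 1:

|x^2 - 9| = |x - 3| \cdot |x + 3| \le |x - 3| \cdot (|x - 3| + 6) < d(d + 6) \le 7d \le e

where the last step holds because we picked d = min(e/7, 1).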

I really benefited from this lecture because it answered a question I'd had from my maths courses! (What exactly is that min(.......) about??) After class I asked Danny how he came up with this method, and he told me it required a little bit of intuition. He said that intuition is needed to raise a conjecture, to form the basis of an idea, or to find an easier route to a solution, and then logic follows up to strictly and objectively test the idea and the method, making sure the intuition leads in the right direction. We need to combine both intuition and logic to thrive in the domain of science. I learned a lot from his words.

proofs

It took us some time to get to writing an actual proof.

There are many kinds of things to prove: statements in the form of a conjunction/disjunction, or implications. If we want to prove a conjunction/disjunction, we have to break it into cases and work through every situation to complete the proof. The most direct and clear case is proving an implication. First we need to assume the prerequisites (like the set our objects come from). If the statement is a "for all", then we "assume" a generic element, because that is the precondition. Every time we assume something, we need to remember to indent on the next line. We also assume the antecedent of the implication and then try to work our way down to the consequent. The most important part of the process is connecting the antecedent with the consequent. Finally we close off all the assumptions we made, and the whole proof is done.
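For example, here is a tiny proof of my own (not one from lecture) written in that indented style, for the claim "for all x in R, if x > 3 then x^2 > 9":

Assume x is a generic real number.             # the set of our objects
    Assume x > 3.                              # antecedent
        Then x^2 = x * x > 3 * x > 3 * 3 = 9.  # x > 3 > 0, so multiplying preserves >
    Then x > 3 implies x^2 > 9.                # introduced the implication
Then for all x in R, x > 3 implies x^2 > 9.    # introduced the "for all"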

The most interesting (also the most difficult) part of a proof, to me, is picking something for a "there exists". At the beginning of the proof we usually write a line like "pick......", and Danny suggested leaving it blank and going on with the proof! This sounds crazy, but it's actually a very helpful strategy, because at the very beginning of a proof one can barely know what value to pick for a variable. So we leave the line blank, figure out the right value during the proving process, and then fill in the blank.


all the prerequisites for writing a proof!

After we learned how to say things in a totally new language (the logical language), we started to learn its properties: conjunctions, disjunctions, negations. The fundamental connection between implication and disjunction was mentioned in the last slog: "not p or q" = "p implies q". It is indeed a very fascinating concept. Using this property, I deduced the reason why the contrapositive says the same thing as the original implication.

If the original is "p implies q", then the contrapositive is, by definition, "not q implies not p". If we translate both into disjunction form, the original says the same thing as "not p or q", while the contrapositive becomes "not (not q) or (not p)", which simplifies to "q or not p". Those two disjunctions contain the same parts, and that shows why the original and the contrapositive are equivalent.

Fortunately, some properties and laws like De Morgan's Law and transitivity are pretty friendly. (They didn't bother me at all!) We learned how to actually work out the truth by drawing Venn diagrams or tabulating truth values. Using these methods, all the properties we learned in class can be proved and expressed in a clear way.
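Tabulating truth values is also something a few lines of Python can do for us. Here is a little checker I sketched (not from lecture) that tests the equivalences above on every truth assignment:

from itertools import product

def implies(p, q):
    return (not p) or q   # "p implies q" expressed as "not p or q"

for p, q in product([False, True], repeat=2):
    assert implies(p, q) == implies(not q, not p)      # contrapositive
    assert (not implies(p, q)) == (p and not q)        # negating an implication
    assert (not (p and q)) == ((not p) or (not q))     # De Morgan's Law
print("every row of the truth table checks out")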

One of the very last things we did before proving was the definition of a limit. A limit is a very abstract concept; it is really hard to understand and even harder to write down. I had already learned something about limits in maths courses, but to me it was nothing more than a vague notion. Ironically, it was in a computer science course that I completely understood the definition of a limit and how to actually prove lim f(x) (x -> a) = p.
A limit just says that if we bring a number extremely close to one value, then f of that number gets extremely close to another value. For example, if we want to say lim n^2 goes to infinity, we define it as: for all c in the real numbers, there exists a b in the real numbers such that for all n, if n is larger than b then n^2 is larger than c. This is, no exaggeration, an ingenious definition (at least from my point of view), because it lets us pick any real number c, and no matter how large that c is, we can always find a breakpoint b beyond which n^2 is larger than c; that is exactly what it means for the function to go to infinity. However, if we switch the first two quantifiers, to "there exists a b, for all c", then the definition no longer says the same thing, because now b cannot depend on c anymore.
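In LaTeX notation, the two orderings are:

\forall c \in \mathbb{R}, \exists b \in \mathbb{R}, \forall n \in \mathbb{N}, n > b \Rightarrow n^2 > c \quad \text{(the real definition)}

\exists b \in \mathbb{R}, \forall c \in \mathbb{R}, \forall n \in \mathbb{N}, n > b \Rightarrow n^2 > c \quad \text{(wrong: one fixed } b \text{ would have to work for every } c\text{)}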

getting used to logical language

The first thing we need to do to thrive in csc165 is to try our best to get used to the logical language. The first day I saw those mysterious symbols I felt I was done for, because I could barely take in this kind of language. Translating English into logical language is even harder. After two weeks of struggle, I've finally made some progress.
At the very beginning, Danny introduced a concept:
"for all p in D, f(p)" #sorry I don't know how to put logical symbols on the slogs.
f(p) is some kind of property that all p in D have in common.
It seemed pretty easy; however, I was a little bit confused about the comma. What does it mean?! I kept wondering: what if I change the comma to an implication? Would that be equivalent? I went to Danny's office hours and the csc165 help centre and learned that the comma simply means "such that". The whole sentence states a property that all p in D have in common. Then I found another way to understand it better: when we state a property this way, it is, to some extent, also an implication! (if p is in D, then f(p))
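In LaTeX notation (since I can't type the symbols here directly), that reading is exactly the standard unpacking:

\forall p \in D, f(p) \quad \equiv \quad \forall p, \; p \in D \Rightarrow f(p)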
Speaking of implications, an implication is formed by an antecedent and a consequent. If an object meets the antecedent, then whether it satisfies the consequent decides whether the implication is true or false. However, it is really hard to believe that if the antecedent itself can never be satisfied, then whatever the consequent is, the implication is always true! It took me a long time to fully understand this part. This is, in fact, what is called vacuous truth. I asked Danny and he said: if you want to falsify an implication, you have to pick an object that meets the antecedent and fails the consequent. If the antecedent can never be met, then you can never pick a counterexample to falsify the implication, and any implication that cannot be falsified is, unexpectedly, true! Although this was really hard to grasp, it made sense to me and helped me figure out the concept. From the process of understanding vacuous truth, I also learned that "p implies q" is equivalent to "not p or q", and that to negate an implication we need to show "p and not q". These ideas are all expressing the same thing.
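Vacuous truth even shows up in Python: all() over a collection with no objects meeting the antecedent returns True, no matter what the consequent says. A tiny illustration of my own:

candidates = range(10)              # objects 0..9
antecedent = lambda n: n > 100      # no candidate ever meets this
consequent = lambda n: n < 0        # false for every candidate

# "for all n, if antecedent(n) then consequent(n)"
print(all(consequent(n) for n in candidates if antecedent(n)))   # True!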