1. T/F:
   a. "S -> a S b | a" is equivalent to "S -> a S b | E, E -> a"
   b. A perceptron can be trained to compute any function of two binary inputs.
   c. For perceptron learning, larger values of lambda will cause smaller changes in the weights.

2. Each of the following attempts to make a 2 by 2 matrix of 1s. State for each one whether we would have aliasing problems, i.e. whether running m[0][0] = 10 after constructing the matrix would change more than one entry. (A quick way to check your answers is sketched after problem 6.)

   a. m = [[1, 1], [1, 1]]

   b. m = [[1, 1]]*2

   c. m = []
      for i in range(2):
          m.append([1, 1])

   d. m = []
      row = []
      for i in range(2):
          row.append(1)
      m.append(row)
      m.append(row)

   e. m = [[1]*2, [1]*2]

3. Write a CFG that represents the language:
   a. of all words of a's and b's that end in two b's
   b. of all words of a's and b's containing "ab" repeated some number of times, e.g. ab, abab, ababab, ...
   c. of all words of a's and b's with only one a.

4. Given the following perceptron, fill in the table below with the output from this perceptron for each combination of inputs. (A sketch of the output computation appears after problem 6.)

   a -> 1 ----|
              V
   b -> -1 -> in
              ^
   c -> 0.5---|

   T = -0.5

   a b c | output
   ------+-------
   0 0 0 |
   0 0 1 |
   0 1 0 |
   0 1 1 |
   1 0 0 |
   1 0 1 |
   1 1 0 |
   1 1 1 |

5. Below is the perceptron update rule:

   w_i = w_i + \lambda * (actual - predicted) * x_i

   For the following perceptron:

   a -> 1 ----|
              V
   b -> 0 --->in
              ^
   1 -> -0.5 -|

   a. What would the weights be if we trained on the following example with lambda = 0.5 (assuming a threshold of 0)?

      a b    actual
      0 0 ->   0

   b. What would the weights be if we trained on the following example with lambda = 0.5 (assuming we start with the original network, NOT the network after running the previous example through)?

      a b    actual
      1 1 ->   0

6. I didn't write a sample problem on writing classes in Python, but make sure you're comfortable writing them (hopefully you should be at this point!). A small example class is sketched below.
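
Sketch for problem 2 (not part of the problems themselves): a minimal way to check a construction for aliasing is to build the matrix, write to one entry, and count how many entries changed. The helper name below is my own.

   def has_aliasing(m):
       """Return True if writing to m[0][0] changes more than one entry."""
       m[0][0] = 10
       changed = sum(1 for row in m for entry in row if entry == 10)
       return changed > 1

   m = [[1, 1], [1, 1]]         # construction (a): two separate inner lists
   print(has_aliasing(m))       # False

   m = [[1, 1]] * 2             # construction (b): both rows are the same list object
   print(m[0] is m[1])          # True -- checking identity is another way to spot aliasing
   print(has_aliasing(m))       # True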
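
Sketch for problem 4: how a perceptron's output is computed, assuming the convention that it outputs 1 when the weighted sum of its inputs exceeds the threshold T and 0 otherwise; adjust the comparison if your notes use >= or -1/+1 outputs. The function name is my own.

   from itertools import product

   def perceptron_output(inputs, weights, threshold):
       total = sum(x * w for x, w in zip(inputs, weights))
       return 1 if total > threshold else 0

   weights = [1, -1, 0.5]   # weights on a, b, c from problem 4
   threshold = -0.5         # T from problem 4

   # Print one row of the table per input combination.
   for a, b, c in product([0, 1], repeat=3):
       print(a, b, c, "->", perceptron_output([a, b, c], weights, threshold))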
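
Sketch for problem 5: one application of the update rule w_i = w_i + lambda * (actual - predicted) * x_i, assuming a threshold of 0 and treating the -0.5 weight as attached to a constant bias input of 1, as in the diagram. The helper names are my own.

   def predict(weights, inputs, threshold=0):
       total = sum(w * x for w, x in zip(weights, inputs))
       return 1 if total > threshold else 0

   def update(weights, inputs, actual, lam):
       """Apply the perceptron update rule once and return the new weights."""
       predicted = predict(weights, inputs)
       return [w + lam * (actual - predicted) * x for w, x in zip(weights, inputs)]

   weights = [1, 0, -0.5]                                  # weights on a, b, and the bias input
   print(update(weights, [0, 0, 1], actual=0, lam=0.5))    # part a: a=0, b=0
   print(update(weights, [1, 1, 1], actual=0, lam=0.5))    # part b: a=1, b=1 (starting from the original weights)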
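
Sketch for problem 6: a small example of a Python class, just to have something concrete to compare your own classes against. The class and method names are my own; it simply wraps the perceptron computations from problems 4 and 5.

   class Perceptron:
       def __init__(self, weights, threshold=0):
           self.weights = list(weights)
           self.threshold = threshold

       def output(self, inputs):
           """Return 1 if the weighted sum exceeds the threshold, else 0."""
           total = sum(w * x for w, x in zip(self.weights, inputs))
           return 1 if total > self.threshold else 0

       def train(self, inputs, actual, lam):
           """Update the weights using w_i += lambda * (actual - predicted) * x_i."""
           predicted = self.output(inputs)
           self.weights = [w + lam * (actual - predicted) * x
                           for w, x in zip(self.weights, inputs)]

   p = Perceptron([1, -1, 0.5], threshold=-0.5)
   print(p.output([1, 0, 0]))   # the a=1, b=0, c=0 row from problem 4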