Writing and Publishing Your Thesis, Dissertation & Research: A Guide for Students in the Helping Professions, by Paul Heppner and Mary J. Heppner

Chapter 12: Conducting Quantitative Analyses and Presenting Your Results
Chi-square test: used when the research question is aimed at examining the frequency of a certain categorical or discontinuous variable (e.g., sex, race) or, more technically, the extent to which an observed (actual) frequency count differs from the expected frequency count.
t-test: used when a researcher wants to compare the mean differences on a dependent variable [which should be a continuous variable (e.g., reading scores)] between two groups [i.e., the independent variable, which should be a discrete (or categorical) variable (e.g., sex)].
One-way ANOVA: used when a researcher wants to examine the mean differences across two or more levels of an independent variable on one dependent variable.
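To make the chi-square idea concrete, the statistic simply sums the squared deviations of the observed counts from the expected counts, each scaled by the expected count. A minimal sketch in Python; the counts (a 60/40 split where 50/50 was expected) are made-up illustrative numbers, not from the book:

```python
# Chi-square goodness-of-fit statistic: measures how far observed
# category counts deviate from the counts expected under the null
# hypothesis. Compare the result against a chi-square table with
# (number of categories - 1) degrees of freedom.

def chi_square_statistic(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# E.g., 60 women and 40 men observed where a 50/50 split was expected:
print(chi_square_statistic([60, 40], [50, 50]))  # 4.0
```

For a real analysis you would use a statistics package that also reports the p-value; this sketch only computes the test statistic itself.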
Chapter 13: Qualitative Results: The Meaning-Making Process
About the Course
Course Topics
* Vocabulary for the design and analysis of algorithms: e.g., "Big-Oh" notation, the "sweet spot" for high-level reasoning about algorithms
* Divide-and-conquer algorithm design paradigm: will apply to integer multiplication, sorting, matrix multiplication, closest pair; general analysis methods (the "Master Method/Theorem")
* Randomization in algorithm design: will apply to QuickSort, primality testing, graph partitioning, hashing
* Primitives for reasoning about graphs: connectivity information, shortest paths, structure of information and social networks
* Use and implementation of data structures: heaps, balanced binary search trees, hashing and some variants (e.g., bloom filters)

Topics in Sequel Course
* Greedy algorithm design paradigm
* Dynamic programming algorithm design paradigm
* NP-complete problems and what to do about them: fast heuristics with provable guarantees, fast exact algorithms for special cases, exact algorithms that beat brute-force search

Skills You'll Learn
* Become a better programmer
* Sharpen your mathematical and analytical skills
* Start "thinking algorithmically"
* Literacy with computer science's "greatest hits"
* Ace your technical interviews

Excellent free reference: "Mathematics for Computer Science" by Eric Lehman

Merge Sort: Motivation and Example
* Good introduction to divide & conquer
* Improves over Selection, Insertion, and Bubble sorts
* Calibrate your preparation
* Motivates guiding principles for algorithm analysis (worst-case and asymptotic analysis)
* Analysis generalizes to the "Master Method"

The Sorting Problem
Input: array of n numbers, unsorted
Output: same numbers, sorted in increasing order

Merge Sort: Example
Recursive algorithm. Input: 5 4 1 8 7 2 6 3
1 - recursively sort the 1st half: 5 4 1 8 -> 1 4 5 8
2 - recursively sort the 2nd half: 7 2 6 3 -> 2 3 6 7
Then merge (1) and (2): 1 2 3 4 5 6 7 8

Merge Sort: Pseudocode
1. Recursively sort the 1st half of the input array
2. Recursively sort the 2nd half of the input array
3. Merge the two sorted sublists into one

Pseudocode for Merge:
C = output array [length = n]
A = 1st sorted array [length = n/2]
B = 2nd sorted array [length = n/2]
i = 1, j = 1
for k = 1 to n:
    if A(i) < B(j):
        C(k) = A(i), i++
    else [B(j) < A(i)]:
        C(k) = B(j), j++
end
(Ignores the end cases: once one of A, B is exhausted, copy over the remainder of the other.)

Merge Sort Running Time?
Key question: the running time of Merge Sort on an array of n numbers.
[Running time ~ # of lines of code executed.]

Running Time of Merge
Upshot: the running time of Merge on an array of m numbers is <= 4m + 2 <= 6m (since m >= 1).

Claim: Merge Sort requires <= 6n*log2(n) + 6n operations to sort n numbers.
[Recall: log2(n) = the number of times you divide n by 2 before reaching 1. E.g., 2 * 2 * 2 = 2^3 = 8, so log2(8) = 3.]

Merge Sort: Analysis
Proof of claim (assuming n is a power of 2), using a "recursion tree":
* The tree has log2(n) + 1 levels (levels j = 0, 1, 2, ..., log2(n)); the base cases at the bottom are single-element arrays.
* At each level j = 0, 1, 2, ..., log2(n) there are 2^j subproblems, each of size n/2^j.
* Total # of operations at level j: <= 2^j * 6(n/2^j) = 6n, independent of the level j.
* Total: <= 6n(log2(n) + 1) = 6n*log2(n) + 6n.

Running Time of Merge Sort
Claim: For every input array of n numbers, Merge Sort produces a sorted output array and uses at most 6n*log2(n) + 6n operations.

Guiding Principles for Analysis of Algorithms
Guiding Principle #1: "Worst-case analysis": our running time bound holds for every input of length n (particularly appropriate for "general-purpose" routines). As opposed to "average-case" analysis and benchmarks, both of which require domain knowledge.
Guiding Principle #2: Won't pay much attention to constant factors or lower-order terms. Justifications: (1) way easier; (2) constants depend on the architecture/compiler/programmer anyway; (3) we lose very little predictive power.
Guiding Principle #3: Asymptotic analysis: focus on the running time for large input sizes n. E.g., 6n*log2(n) + 6n (Merge Sort) is "better than" (1/2)n^2 (Insertion Sort). Justification: only big problems are interesting!

What is a "Fast" Algorithm?
Adopting these three guiding principles: a "fast" algorithm is one whose worst-case running time grows slowly with the input size. "Sweet spot": mathematical tractability and predictive power. Holy grail: linear running time (or close to it).

II. ASYMPTOTIC ANALYSIS

The Gist
Importance: provides the vocabulary for the design and analysis of algorithms (e.g., "big-oh" notation).
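Before moving on, the Merge Sort routine described earlier can be made concrete. A minimal Python translation of the lecture pseudocode (0-based indexing instead of the lecture's 1-based, and with the omitted end cases handled explicitly):

```python
# Runnable sketch of Merge Sort: recursively sort each half of the
# input, then merge the two sorted halves.

def merge(a, b):
    """Merge two sorted lists into one sorted list."""
    c = []
    i = j = 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            c.append(a[i]); i += 1
        else:
            c.append(b[j]); j += 1
    # End cases the lecture pseudocode omits: one list is exhausted,
    # so copy over the remainder of the other.
    c.extend(a[i:])
    c.extend(b[j:])
    return c

def merge_sort(arr):
    if len(arr) <= 1:          # base case: single-element (or empty) array
        return arr
    mid = len(arr) // 2
    return merge(merge_sort(arr[:mid]), merge_sort(arr[mid:]))

print(merge_sort([5, 4, 1, 8, 7, 2, 6, 3]))  # [1, 2, 3, 4, 5, 6, 7, 8]
```

Each call to merge does work linear in the subproblem size, and there are log2(n) + 1 levels of recursion, which is exactly the structure behind the 6n*log2(n) + 6n bound.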
Asymptotic Analysis
Idea: suppress constant factors and lower-order terms.
* Lower-order terms become irrelevant for large inputs.
* Constant factors are too system-dependent.
Example: equate 6n*log2(n) + 6n with just n*log(n).
Terminology: the running time is O(n log n) ["big-oh" of n log n], where n = the input size (e.g., the length of the input array).

Example: One Loop
Problem: does array A contain the integer t?
Given A (an array of length n) and t (an integer):
for i = 1 to n:
    if A[i] == t: return True
return False
Question: What is the running time?
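The one-loop search above as runnable Python; the loop inspects each element at most once, so the running time is linear in n, i.e., O(n):

```python
# Linear search: scan the array once, returning as soon as t is found.

def contains(A, t):
    for x in A:
        if x == t:
            return True
    return False

print(contains([3, 1, 4, 1, 5], 4))  # True
print(contains([3, 1, 4, 1, 5], 9))  # False
```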
Example: Two Loops
Problem: does array A or array B contain the integer t?
Given A, B (arrays of length n) and t (an integer):
for i = 1 to n:
    if A[i] == t: return True
for i = 1 to n:
    if B[i] == t: return True
return False
Question: Running time?
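The two sequential loops as Python. This is twice the work of the single-loop example, but constant factors are suppressed in asymptotic analysis, so the running time is still O(n):

```python
# Two sequential (not nested) loops: search A, then search B.
# Work is proportional to 2n, which is still O(n).

def contains_either(A, B, t):
    for x in A:
        if x == t:
            return True
    for x in B:
        if x == t:
            return True
    return False

print(contains_either([1, 2], [3, 4], 4))  # True
```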
Example: Two Nested Loops
Problem: do arrays A, B have a number in common?
Given arrays A, B of length n:
for i = 1 to n:
    for j = 1 to n:
        if A[i] == B[j]: return True
return False
Question: Running time?
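The nested loops as Python. In the worst case every one of the n*n pairs (A[i], B[j]) is inspected, so the running time is O(n^2):

```python
# Two nested loops: compare every element of A against every element
# of B, giving quadratic worst-case running time.

def have_common_element(A, B):
    for a in A:
        for b in B:
            if a == b:
                return True
    return False

print(have_common_element([1, 3, 5], [2, 4, 5]))  # True
```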
Example: Two Nested Loops (II)
Problem: does array A have duplicate entries?
Given array A of length n:
for i = 1 to n:
    for j = i + 1 to n:
        if A[i] == A[j]: return True
return False
Question: Running time? O(n^2). Correct: a quadratic-time algorithm.

Why Study Algorithms?
*Important for all branches of computer science
*Plays a key role in modern technological innovation
*Provides a novel "lens" on fields outside of computer science and technology: quantum mechanics, economic markets, evolution
*Challenging (i.e., good for the brain!)
*Fun

Integer Multiplication
Input: two n-digit numbers x and y
Output: the product x*y
"Primitive operation": add or multiply two single-digit numbers

The Grade-School Algorithm
Upshot: # of operations overall <= constant * n^2

The Algorithm Designer's Mantra
"Perhaps the most important principle for the good algorithm designer is to refuse to be content." Can we do better?

Karatsuba Multiplication
Example: x = 5678, y = 1234, with a = 56, b = 78, c = 12, d = 34
Step 1: Compute a*c = 672
Step 2: Compute b*d = 2652
Step 3: Compute (a+b)*(c+d) = 134*46 = 6164
Step 4: Compute (3) - (1) - (2) = 6164 - 672 - 2652 = 2840
Step 5: Pad (1) with 4 zeroes at the end, pad (4) with 2 zeroes, and add them to (2):
6720000 + 284000 + 2652 = 7006652 = 5678 * 1234

A Recursive Algorithm
Write x = 10^(n/2)*a + b and y = 10^(n/2)*c + d, where a, b, c, d are n/2-digit numbers [example: a = 56, b = 78, c = 12, d = 34].
Then x*y = (10^(n/2)*a + b)(10^(n/2)*c + d) = 10^n*ac + 10^(n/2)*(ad + bc) + bd   (*)
(simple base case omitted)
Idea: recursively compute ac, ad, bc, bd, then compute (*) in the straightforward way.

Karatsuba Multiplication: x*y = 10^n*ac + 10^(n/2)*(ad + bc) + bd
Step 1: Recursively compute ac
Step 2: Recursively compute bd
Step 3: Recursively compute (a+b)*(c+d) = ac + ad + bc + bd
Step 4: (3) - (1) - (2) = ad + bc
Upshot: only 3 recursive multiplications are needed (plus some additions)!

I. INTRODUCTION (Week 1)
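Returning to Karatsuba multiplication: the recurrence can be sketched as runnable Python. This is a sketch under the assumption of non-negative integers, splitting each operand at half the digit length of the larger one (the lecture's 10^(n/2) split):

```python
# Karatsuba multiplication: x*y = 10^(2m)*ac + 10^m*(ad + bc) + bd,
# computed with only three recursive multiplications.

def karatsuba(x, y):
    # Base case: single-digit multiplication is a "primitive operation".
    if x < 10 or y < 10:
        return x * y
    # Split at half the digit length of the larger operand.
    m = max(len(str(x)), len(str(y))) // 2
    a, b = divmod(x, 10 ** m)        # x = 10^m * a + b
    c, d = divmod(y, 10 ** m)        # y = 10^m * c + d
    ac = karatsuba(a, c)             # step 1
    bd = karatsuba(b, d)             # step 2
    gauss = karatsuba(a + b, c + d)  # step 3: (a+b)(c+d)
    ad_plus_bc = gauss - ac - bd     # step 4: Gauss's trick
    return 10 ** (2 * m) * ac + 10 ** m * ad_plus_bc + bd

print(karatsuba(5678, 1234))  # 7006652
```

Gauss's trick is in steps 3 and 4: (a+b)(c+d) - ac - bd = ad + bc, so the fourth multiplication of the naive recursive algorithm is replaced by additions and a subtraction.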

Author: Vitali Kremez
November 2015