We started working on his statistics paper, including making statistical calculations and inferences from a table of population data; using a pie chart properly, calculating missing sectors, and extrapolating conclusions from one period to another; creating a cumulative frequency step polygon from a table of data and calculating deciles, the range, and probabilities; calculating the chain base index from a table and drawing inferences from it; and reinterpreting a frequency histogram that had been drawn incorrectly, drawing it correctly, and describing it properly. We then worked on his general math problems to get him ready for his test.
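For reference, here is a minimal sketch of reading deciles off a cumulative frequency step polygon; the class boundaries and frequencies are invented for illustration and are not the figures from his table.

# Hypothetical example: deciles from a cumulative (step) frequency table.
upper_bounds = [10, 20, 30, 40, 50]
frequencies  = [4, 9, 12, 10, 5]

cumulative = []
total = 0
for f in frequencies:
    total += f
    cumulative.append(total)          # running total defines the step polygon

n = cumulative[-1]                     # total frequency (40 here)
for d in range(1, 10):                 # 1st through 9th deciles
    target = d * n / 10                # position of the d-th decile
    # first class whose cumulative frequency reaches the target position
    i = next(k for k, c in enumerate(cumulative) if c >= target)
    print(f"D{d}: falls in the class up to {upper_bounds[i]} (cum. freq. {cumulative[i]})")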
We worked on his statistics paper, including writing hypotheses, determining what kind of sampling was being used, creating categories for the given data, writing experimental questions, determining the best data collection method, interpreting a triple Venn diagram, discussing inter-observer bias, determining possible extraneous variables, interpreting a table of demographic data, etc.
We finished a problem set including the Pythagorean theorem, area of a trapezium, trig functions (SOH-CAH-TOA), volume of a prism, volume and curved surface area of a cylinder, circumference and area of a circle, the order of operations (PEMDAS in the US, BIDMAS in the UK), arithmetic series, congruency of two polygons, obtuse angles, lines of symmetry, area of a parallelogram, vertical angles, reflex angles, isosceles triangle relationships, range, median, mode, exponents, prime numbers, cube roots, solving linear equations, unit conversions (using unit analysis), simplifying algebraic expressions, probability, calculating percentages and percent increase, projections and rotations, solving simultaneous equations, calculating the modal class and total of a frequency distribution, solving a compound inequality and graphing it, etc.
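As a quick reference for a few of the geometry formulas above, here is a small sketch; all of the lengths are invented for illustration and are not the numbers from his problem set.

import math

# Pythagorean theorem: hypotenuse of a right triangle with legs 3 and 4
a, b = 3.0, 4.0
hypotenuse = math.sqrt(a**2 + b**2)            # 5.0

# Area of a trapezium: (1/2) * (sum of parallel sides) * height
p1, p2, h = 6.0, 10.0, 4.0
trapezium_area = 0.5 * (p1 + p2) * h           # 32.0

# Cylinder with radius r and height hc
r, hc = 3.0, 7.0
cylinder_volume = math.pi * r**2 * hc          # pi * r^2 * h
curved_surface  = 2 * math.pi * r * hc         # 2 * pi * r * h (excludes the two ends)

print(hypotenuse, trapezium_area, round(cylinder_volume, 2), round(curved_surface, 2))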
We did the problems in his statistics problem package, including interpreting correlation coefficients, calculating average annual prices, calculating the chain base index, calculating the percentage increase, calculating the geometric mean, interpreting unemployment rate data over several years, creating a cumulative frequency step polygon, determining the differences between random, simple random, stratified, and cluster sampling, etc.
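A minimal sketch of the chain base index, percentage increase, and geometric mean calculations, assuming a made-up series of yearly prices rather than the figures from his package:

# Hypothetical yearly prices for illustration only
prices = [200, 210, 231, 220]

# Chain base index: each year expressed as a percentage of the previous year
chain_index = [round(prices[i] / prices[i - 1] * 100, 1) for i in range(1, len(prices))]
# -> [105.0, 110.0, 95.2]

# Percentage increase over the whole period
overall_increase = (prices[-1] - prices[0]) / prices[0] * 100   # 10.0%

# Geometric mean of the chain index numbers (average yearly multiplier)
product = 1.0
for idx in chain_index:
    product *= idx
geometric_mean = product ** (1 / len(chain_index))

print(chain_index, overall_increase, round(geometric_mean, 1))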
We worked through several of the problems, including completing a scatter plot and drawing a line of best fit, calculating the mean, using the best-fit line to extrapolate predicted values and giving reasons why this method could be unreliable, using and interpreting a choropleth map, formulating hypotheses, formulating survey questions from the hypotheses, determining whether data is quantitative or qualitative and primary or secondary, discussing the advantages and disadvantages of the different kinds of data, calculating the mean, median, quartiles, variance, and standard deviation from given data, and comparing the means and standard deviations of two data sets.
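A short sketch of the summary-statistics comparison; the two data sets below are invented for illustration and are not the ones from his paper.

import statistics

set_a = [12, 15, 15, 18, 20, 22, 25]
set_b = [10, 14, 16, 19, 21, 28, 30]

for name, data in (("A", set_a), ("B", set_b)):
    mean = statistics.mean(data)
    median = statistics.median(data)
    variance = statistics.pvariance(data)          # population variance
    std_dev = statistics.pstdev(data)              # population standard deviation
    q1, q2, q3 = statistics.quantiles(data, n=4)   # quartiles
    print(f"Set {name}: mean={mean:.1f}, median={median}, Q1={q1}, Q3={q3}, "
          f"var={variance:.1f}, sd={std_dev:.1f}")

# Comparison: the two sets have similar means, but set B has the larger
# standard deviation, i.e. its values are more spread out about the mean.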