Comments About the SYST 469 Projects and Presentations (Spring 2018, M)

 

  • Each two-person team will have 10 minutes for your project presentation (in PowerPoint) and questions.

 

  • Two extra points (out of the total 100 for the class) will be given to student teams who present their project on April 23rd. So, a high A on the project could be worth 27 points instead of 25.

 

  • Please start volunteering for the April 23rd date via email. Please be reasonably sure that you will be ready to present on that date before you volunteer. You have until April 16th to change your mind so that another team can take your place. If you volunteered but can’t present on April 23rd, you will lose 2 points if you prevented another team from presenting. I’ll give date priority to students who need extra points after the second exam.

 

  • Each team will perform an experiment containing a usability test for two products. Your experiment will be guided by hypotheses identifying expected differences in performance on usability goal(s) (e.g., efficiency) and user-experience goal(s) (e.g., satisfaction). Remember, you are measuring usability, so don’t use tasks such as timing how long it takes for a device to turn on or off; that measures the device, not the user.

 

  • You must have one objectively measured usability goal (e.g., time as a measure of efficiency). In addition, I’d like to know your usability goal requirement level for each task and the rationale for it. Remember, you’re simulating the process of establishing requirements. So, your usability goal requirements should represent “requirements,” not the average scores on your pilot test.

 

  • I’d also like to know your user-experience goal requirement(s). For example, this may be an overall mean satisfaction score significantly better than the neutral point on a 5-point scale or, perhaps, that at least half of your participants say that, overall, they had a positive experience using the product. In addition, if you’d like, you may have specific user-experience goal requirements for each task or for the different types of questions you ask your participants. (A short illustrative sketch of checking such requirements follows the next bullet.)
  • If you want, you can turn your two-group (products) experiment into a factorial design (two or more independent variables) by defining two different groups of participants (e.g., on the basis of age, gender, computing experience or some other independent variable dealing with people’s characteristics). You do not have to do this; it’s up to you.
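
  • For illustration only: the two example user-experience requirements mentioned above (a mean satisfaction score better than the neutral point, and at least half of participants reporting a positive experience) could be checked as in this minimal Python/scipy sketch. The ratings and the definition of a “positive” response are hypothetical.

      import numpy as np
      from scipy import stats

      # Hypothetical overall satisfaction ratings on a 5-point scale (3 = neutral point).
      ratings = np.array([4, 4, 5, 3, 4, 2, 5, 4, 3, 4])

      # Requirement 1: mean satisfaction significantly better than the neutral point.
      # One-sample t-test against 3, one-tailed in the "greater" direction.
      t_stat, p_value = stats.ttest_1samp(ratings, popmean=3, alternative='greater')
      print(f"mean = {ratings.mean():.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")

      # Requirement 2: at least half of participants report a positive overall experience.
      # Counting is enough to check the requirement; the (optional) binomial test asks
      # whether the observed proportion is significantly greater than 0.5.
      positive = int((ratings >= 4).sum())   # "positive" defined here as a rating of 4 or 5
      result = stats.binomtest(positive, n=len(ratings), p=0.5, alternative='greater')
      print(f"{positive}/{len(ratings)} positive responses, p = {result.pvalue:.3f}")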

 

  • “Task” is actually an independent variable. The number of tasks defines the number of conditions (or levels) on the “task” independent variable. So, you actually have a factorial design. You do not have to worry about tasks as an independent variable or about factorial designs unless you choose to do so.

 

  • Remember, we’re expecting you to implement good experimenter control over persons, procedures, and measurement when you perform your experiment, and to demonstrate that you have done so when presenting your project.

 

  • 10 participants are enough; that is, 5 participants per team member for a two-person team. Each team member should be involved in testing each device to remove bias. More participants are better for experiments because I’m asking you to perform statistical tests for hypotheses. However, you do not need more than 20 participants. I realize this is just a class project and you have other classes.

 

  • Have your participants voluntarily sign a short informed consent form before they participate in your project. Make sure to tell your participants (1) the general goal of your project; (2) what they’ll be doing; (3) that there are no risks; (4) that their responses will be kept private; and (5) that they can stop participating at any time without giving any reason. Your participants are doing you a favor, so please treat them accordingly. [Note: I do not want you conducting any experiment that has any possible physical or emotional risks to your participants.]

 

  • To ensure good control, perform a pilot study with 2 to 4 participants, depending on whether you’re using a within-subject or between-subject design, respectively.

 

  • Basic structure of your presentation:

 

  • Introduction
  • Overall Usability and User Experience Hypotheses (Null and Alternative) and Rationale for them
  • Tasks (and rationale for them)
  • Usability Goal Requirements (for each task and overall)
  • User Experience Goal Requirements (could be for each task or a set of questions or just one overall user experience question)
  • Method (how you did your experiment)
    • Experimental Design
    • Participants
    • Procedures
    • Pilot Test Results and Changes to Procedures
  • Results (what you found out based on your statistical tests) for
    • Usability goal requirements and hypothesis
    • User experience goal requirements and hypothesis
  • Discussion (general conclusion, reasons for your results, concerns, limitations, and proposed next steps, including possible suggested changes to the products)
  • Appendix
    • One copy of Questionnaire
    • One copy of Informed Consent Form
    • Data (all of it, but not the names of your participants)
    • Printouts for All Statistical Tests

 

  • Questions will not be asked until the team finishes their presentation. The only exception is that I can ask questions if I can’t understand the presentation. Also, I get the opportunity to ask the first question, if I want to do so.

 

  • At least one team member needs to upload your presentation into the Blackboard assignment folder containing the date for your presentation. Failure to do so will result in both team members losing 2 points. I expect you to use flash drives or web access (e.g., your Blackboard access, not mine) to make your presentations.

 

  • All students must present part of their team’s presentation to the class. And, please, practice your talks. You may lose points if you’re so confused that you don’t know your part of the presentation.

 

  • In addition, your team must give me a paper copy of your presentation when you present it, with no more than two (2) slides per page.

 

  • Provide enough information on your slides for me to remember what you did when I grade the projects a week (or more) after you give your presentation.

 

  • But don’t put so much information on your slides that I can’t read them when you make your presentation. Feel free to use the Notes Page in PowerPoint for the paper copy you give me, but you may then have to print only one slide and notes page per page.

 

  • Remember to make sure that I can read your slides, and especially your graphs, particularly if you give me a black-and-white copy of your presentation. I’ll be grading from your paper copy and will only go to your digital copy if I need to do so.

 

  • I only need to see mean values (and possibly standard deviations) and the results of your statistical tests when you make your presentation.

 

  • Remember, the statistical tests depend on your:

 

  • hypotheses,
  • design (e.g., between- or within-subject for each of the independent variables in your experiment), and the
  • type of dependent measure (e.g., binary or continuous).

 

  • You’ll be using the results of your statistical tests to make your conclusions.

 

  • The statistical tests will help you decide whether or not your data support your hypotheses. The tests will not prove or disprove your hypotheses.
  • Remember, there are both Type I and Type II errors. There is also a difference between statistical and practical significance. These issues should help you decide on your alpha level (e.g., p < .05) for deciding whether or not the data support rejection of your null hypothesis.

 

  • Figure out how you are going to analyze your data, including your questionnaire data, before collecting it.

 

  • You only need to test your hypotheses statistically, not each goal requirement for each product.

 

  • However, you must note whether or not your goal requirements were met, on average, by both products for each task. You may want to perform a statistical test if the mean performance for one or both products fails to achieve a goal requirement. (You also should note how many participants failed to reach the goal requirement level for each of your tasks.)
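
  • For illustration only: checking a goal requirement and counting the participants who missed it might look like the following minimal Python/scipy sketch. The task times and the 45-second requirement are made up.

      import numpy as np
      from scipy import stats

      # Hypothetical task-completion times (seconds) for one product on one task,
      # and a hypothetical usability goal requirement of 45 seconds or less.
      times = np.array([42.1, 47.5, 39.8, 51.2, 44.0, 46.7, 40.9, 49.3, 43.6, 45.8])
      requirement = 45.0

      met_on_average = times.mean() <= requirement
      failures = int((times > requirement).sum())
      print(f"mean = {times.mean():.1f} s; requirement met on average: {met_on_average}")
      print(f"{failures} of {len(times)} participants exceeded the requirement")

      # Optional: if the mean misses (or barely meets) the requirement, a single-sample
      # t-test indicates whether the mean time is significantly above the requirement.
      t_stat, p_value = stats.ttest_1samp(times, popmean=requirement, alternative='greater')
      print(f"t = {t_stat:.2f}, p = {p_value:.3f}")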

 

  • Remember to perform statistical tests for your subjective data (questionnaires) too!

 

  • You can use the statistical package of your choice. Here are some possibilities:

 

  • Excel
    • For example, for a t-test to test “time” data for two products
      • TTEST is under “Formulas, More Functions, Statistical” on the Excel menu bar
      • Use “paired” for a within-subject design and “two sample” for a between-subject, “two group” design
      • If you have a “two sample” design, first perform an F-test to conclude whether your two samples have equal or unequal variance (e.g., p < .05). (A short Python sketch of the equivalent tests follows this list.)
    • Correlations (“CORREL”) and many other statistical calculations also can be found under “Formulas, More Functions, Statistical.” For example, you may want to determine if there is a significant correlation (i.e., relationship) between your usability and user-experience data.
    • I also have posted a set of screen shots that a student sent to me for enabling Excel’s Data Analysis add-in (the Analysis ToolPak), which appears under Data on the menu bar, for doing the statistical analysis. It has the advantage of providing printouts for your statistical analyses.
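
    • For illustration only: if you would rather work in Python than Excel, the same tests can be run with scipy, as in this minimal sketch. The times, the satisfaction ratings, and the 0.05 cutoff are made-up assumptions, not real data.

        import numpy as np
        from scipy import stats

        # Hypothetical task-completion times (seconds) for two products.
        product_a = np.array([42.1, 38.5, 51.0, 47.3, 44.8, 39.9, 46.2, 50.5, 43.0, 41.7])
        product_b = np.array([55.4, 49.2, 60.1, 52.8, 58.3, 47.6, 54.0, 61.2, 50.9, 53.5])

        # Within-subject design (each participant used both products): paired t-test.
        t_paired, p_paired = stats.ttest_rel(product_a, product_b)

        # Between-subject design (different participants per product): first an F-test on
        # the variance ratio to choose the equal- or unequal-variance two-sample t-test.
        f_ratio = np.var(product_a, ddof=1) / np.var(product_b, ddof=1)
        df_a, df_b = len(product_a) - 1, len(product_b) - 1
        p_f = 2 * min(stats.f.cdf(f_ratio, df_a, df_b), stats.f.sf(f_ratio, df_a, df_b))
        t_ind, p_ind = stats.ttest_ind(product_a, product_b, equal_var=(p_f >= 0.05))

        # Correlation (like Excel's CORREL) between a usability measure (time) and a
        # hypothetical 5-point satisfaction rating for the same participants.
        satisfaction = np.array([4, 5, 3, 4, 4, 5, 3, 2, 4, 4])
        r, p_r = stats.pearsonr(product_a, satisfaction)

        print(f"paired t = {t_paired:.2f}, p = {p_paired:.3f}")
        print(f"two-sample t = {t_ind:.2f}, p = {p_ind:.3f} (equal variances: {p_f >= 0.05})")
        print(f"r = {r:.2f}, p = {p_r:.3f}")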

 

  • VassarStats: Statistical Computational Website (Google it)
    • For doing all the tests mentioned for Excel. In addition:
    • For testing proportions (e.g., of participants who made an error performing a task)
      • against goal requirements (e.g., under “Frequency Data,” then “Binomial Probabilities”) or
      • for two products (under “Proportions,” then “Significance of Difference Between Two Independent Proportions” for a between-subject design [or “correlated” for a within-subject design])
    • Single-Sample t-test for testing data for continuous variables (e.g., mean time) against a goal requirement
    • Analysis of Variance (ANOVA) for a simple factorial design (e.g., products by tasks) with continuous data
    • Categorical data tables for factorial design with proportions, including a correction if your expected cell frequencies are too small (under “Frequency Data”).
    • Note: It’s definitely okay if you just do a series of t-tests or proportion tests if you’re not familiar with tests for factorial designs. I do not want the statistics to get in the way of you doing your project. However, remember that a factorial design (e.g., ANOVA) tests for interactions and better controls for Type I errors. (A short Python sketch of several of the tests above follows this list.)
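
    • For illustration only: several of the VassarStats tests above have scipy equivalents, sketched below. The error counts, group sizes, and the 20% requirement are hypothetical, and a one-way ANOVA across tasks stands in for the full products-by-tasks factorial ANOVA.

        import numpy as np
        from scipy import stats

        # Proportion of participants who made an error on a task, tested against a
        # hypothetical goal requirement that no more than 20% make an error.
        errors, n = 4, 10
        binom = stats.binomtest(errors, n, p=0.20, alternative='greater')
        print(f"error rate {errors}/{n}, p = {binom.pvalue:.3f} vs. the 20% requirement")

        # Two independent proportions (between-subject design): a 2x2 chi-square with
        # Yates' correction as a stand-in for the two-proportion test.
        table = np.array([[4, 6],    # product A: errors, no errors
                          [1, 9]])   # product B: errors, no errors
        chi2, p_chi2, dof, _ = stats.chi2_contingency(table, correction=True)
        print(f"chi-square = {chi2:.2f}, p = {p_chi2:.3f}")

        # One-way ANOVA across three tasks with (hypothetical) continuous time data.
        task1 = np.array([40.2, 43.5, 38.9, 45.1, 41.8])
        task2 = np.array([52.0, 49.7, 55.3, 50.8, 53.6])
        task3 = np.array([47.4, 44.9, 49.2, 46.5, 48.8])
        f_stat, p_anova = stats.f_oneway(task1, task2, task3)
        print(f"F = {f_stat:.2f}, p = {p_anova:.3f}")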

 

  • Note if you use Minitab: for a negative t-value to be significant at the 0.05 level (i.e., p < .05) in a two-tailed test, the cumulative probability for the corresponding positive t-value must be greater than 0.975 (i.e., 1 – 0.975 = 0.025 is left in each tail), so use 0.975 as your cutoff.
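
  • For illustration only: the same two-tailed cutoff can be checked outside Minitab. This small Python/scipy sketch uses a made-up t-value and degrees of freedom to show that a cumulative probability above 0.975 for |t| corresponds to p < .05 two-tailed.

      from scipy import stats

      t_value, df = -2.50, 9          # hypothetical t statistic and degrees of freedom

      # Two-tailed p-value from the t distribution.
      p_two_tailed = 2 * stats.t.sf(abs(t_value), df)

      # Equivalent check: the cumulative probability of |t| must exceed 0.975
      # (leaving less than 0.025 in each tail) for p < .05.
      cumulative = stats.t.cdf(abs(t_value), df)
      print(f"p (two-tailed) = {p_two_tailed:.3f}, significant at .05: {p_two_tailed < 0.05}")
      print(f"CDF(|t|) = {cumulative:.4f}, exceeds 0.975: {cumulative > 0.975}")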

 

Grading: Project Errors for Which I Will Take Off Points

  • Confusing null and alternative hypotheses
  • Not telling me your user requirements
  • Regarding your tasks
    • Having a trivial set of tasks
    • Not providing the rationale for them
  • Regarding your usability goal requirements and your user-experience goal requirements
    • Not having any
    • Not indicating whether the requirements are met
      • On average, and
      • How many participants failed to meet the requirement for each task with each device (and, if appropriate, for each question for user experience)
  • Regarding your method
    • Failing to make it clear that you used good experimenter control regarding participants, procedures, and measurement
    • Failing to randomly assign participants to conditions for a between-subject design or to use randomized counterbalancing for a within-subject design
  • Regarding your pilot study
    • Not doing one
    • Only doing one to obtain usability goal requirements instead of doing it to find and fix problems with your testing procedures
    • Not telling me what changes you made to your testing procedures
  • Regarding the presentation of your results
    • Not indicating your task or question names when presenting your results
    • Not indicating which product performed significantly better based on statistical tests (e.g., p < .05)
  • Regarding your statistical tests
    • Not having any
    • Doing them wrong: for example,
      • You can’t have p > 1.0
      • Doing a “two-sample” t-test in Excel when you have a within-subject design, or a “paired” t-test when you have a between-subject design
    • Not having your statistical printouts in the appendix so that I can make sure that you did your tests correctly
    • Not ensuring that your slides correctly present the information in your printouts
  • Regarding your conclusions
    • Not using both your statistical analysis and your requirements analysis to make your conclusions
    • Making the wrong conclusions (e.g., saying that you could not reject the null hypothesis when you could, or vice versa)
  • Not presenting enough information (or thinking critically) when you present your discussion/limitations
  • Other things:
    • Giving a poor presentation
    • More than 2 slides per page
    • Failing to upload your presentation

 
