preparing for kiddraw_tracing_eval launch todos #2

@judithfan

Description

  • Construct comprehensive dataframes for the tracing data, with exactly the same attributes as those for ordinary object-drawing trials
  • Subset & modify this dataframe to generate a stimulus dictionary for the rating task: remove the clunkiest of the columns (e.g., the SVG and PNG string data, perhaps) and add image_url, experiment name & version, number_rating_levels = 5, lower_bound = “poor”, upper_bound = “excellent” (see the pandas sketch after this list)
  • Upload data
  • Change the color of the reference shape in the tracing
  • Upload the overlapping reference+tracing PNGs to Amazon S3 so that they have public URLs (see the boto3 sketch after this list)
  • Upload the stimulus dictionary to our mongo ‘stimuli’ db so that it exists and can be retrieved by the rating task (see the pymongo sketch after this list)
  • Conduct all testing and task development on the server in a tmux session called 'tracing_eval'; wherever the mongo port is set to 6000, change it to 6002 so that it doesn't complain.
  • Mock up a prototype of the rating task interface using a placeholder sketch for a single trial
  • Configure setup.js so that it now retrieves actual data from the database
  • Test prototype
  • Resolve experimental design questions: specifically, how many trials per HIT?
  • Set up nosub, etc. to post HITs and download data from AMT using the LangCog lab account
  • Run a small pilot
  • Set up a preliminary analysis pipeline on the small pilot to make sure we are saving all the data we need
  • Scale up to collect k ratings on N tracings, for some reasonable k and N that we will decide on...
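
A minimal pandas sketch of the dataframe-to-stimulus-dictionary step above. The column names (svgString, pngData, filename), the S3 bucket URL, and the experiment name/version strings are illustrative assumptions, not confirmed values from the codebase.

```python
import pandas as pd

def make_stimulus_records(T: pd.DataFrame) -> list:
    """Turn the tracing dataframe T into a list of stimulus dicts for the rating task."""
    # Drop the bulky string-data columns that the rating task doesn't need
    # (column names here are placeholders for whatever the dataframe actually uses).
    slim = T.drop(columns=['svgString', 'pngData'], errors='ignore')
    # Add the rating-task metadata fields listed above.
    slim = slim.assign(
        image_url=slim['filename'].map(  # assumes a per-tracing filename column
            lambda f: 'https://s3.amazonaws.com/kiddraw-tracing/{}'.format(f)),
        experimentName='kiddraw_tracing_eval',
        experimentVersion='1',
        number_rating_levels=5,
        lower_bound='poor',
        upper_bound='excellent',
    )
    # One dict per tracing, ready to insert into the mongo 'stimuli' db.
    return slim.to_dict(orient='records')
```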
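
One way to give the reference+tracing PNGs public URLs is to upload them with a public-read ACL via boto3; this sketch assumes AWS credentials are already configured and uses a placeholder bucket name ('kiddraw-tracing').

```python
import os
import boto3

def upload_pngs(local_dir, bucket='kiddraw-tracing'):
    """Upload every PNG in local_dir to S3 and return a {filename: public_url} map."""
    s3 = boto3.client('s3')
    urls = {}
    for fname in sorted(os.listdir(local_dir)):
        if not fname.endswith('.png'):
            continue
        # 'public-read' makes each object retrievable at a public URL.
        s3.upload_file(
            os.path.join(local_dir, fname), bucket, fname,
            ExtraArgs={'ACL': 'public-read', 'ContentType': 'image/png'})
        urls[fname] = 'https://s3.amazonaws.com/{}/{}'.format(bucket, fname)
    return urls
```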
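
And a pymongo sketch for pushing the stimulus dictionary into the 'stimuli' db, assuming mongod is reachable on port 6002 (per the port note above) and using a placeholder collection name.

```python
from pymongo import MongoClient

def upload_stimuli(records, port=6002):
    """Insert one document per tracing so the rating task can fetch them."""
    coll = MongoClient('localhost', port)['stimuli']['kiddraw_tracing_eval']
    result = coll.insert_many(records)
    print('inserted {} stimulus documents'.format(len(result.inserted_ids)))
```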

To switch from development mode to production mode, always follow these steps:

  • js/setup.js: Make sure the number of trials is 105, not 20.
  • js/jspsych-image-button-response.js: Make sure that the iterationName in the trial_data object in the plugin is the name of the current experiment.
  • app.js: Make sure that the stimulus database/collection that you're pulling from is the one without the _dev suffix at the end.
  • Whenever making any changes to the task, test HIT submission on AMT Sandbox first.
  • Test the task to make sure the data is being written out properly by running it through your Python analysis pipeline (a minimal completeness check is sketched below).
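
As a rough sanity check on the written-out data, something like the following could run at the start of the Python pipeline; the database/collection names, the gameID key, and the expected field names are assumptions to be replaced with the actual schema.

```python
from pymongo import MongoClient

EXPECTED_FIELDS = ['wID', 'stimulus_id', 'rating', 'rt', 'iterationName']  # illustrative

def check_session(game_id, port=6002, num_trials=105):
    """Verify that one session wrote out the expected number of complete trial records."""
    coll = MongoClient('localhost', port)['ratings']['kiddraw_tracing_eval']
    trials = list(coll.find({'gameID': game_id}))
    assert len(trials) == num_trials, \
        'expected {} trials, found {}'.format(num_trials, len(trials))
    for t in trials:
        missing = [f for f in EXPECTED_FIELDS if f not in t]
        assert not missing, 'trial missing fields: {}'.format(missing)
    print('session {} looks complete'.format(game_id))
```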
