
Conversation

@akash-das2000

No description provided.

akash-das2000 and others added 25 commits February 27, 2024 15:38
Changed the data_path and image_folder paths
Changed the path for train_mem.py to absolute path
Uncommented the PROMPT_VERSION and MODEL_VERSION
changed path names for model_name_or_path and output_dir
Added --mm_projector_type mlp2x_gelu \ as per the wandb blog and commented out the pretrain_mm_mlp_adapter line
Changed the pretrain_mm_mlp_adapter path to match the llava-v1.5-7b path in the home dir and removed the --mm_projector_type flag
Commented out mm_vision_select_layer
Removed comments; suspected they might be the cause of the recurring "command not found" errors
Commented out the commands that were not found
Stupid mistake of missing a "\" after pretrain_mm_mlp_adapter
Fixed code spacing typos
Changed the file paths for the v1.5 script
Reverted the finetune_lora.sh file to the original
@visanth-techconative

  1. The input and output are not apparent (see the sketch after this list):
    python LLaVA_InitialJson.py ~/LLaVA/data_prep/Sketch2Code_og/data ~/LLaVA/data_prep/Sketch2Code_og

    Use this as reference:
    python scripts/prepare_ui_gen_data.py --csv_path test_split.csv --destination_path $MODEL/data/csv --checkpoint_dir...

    • In the reference command, it is clear that the input is a CSV path pointing to the test_split.csv file and that the output is a destination path, which will also hold a CSV file.
  2. Running the script from the home dir for convenience: instead of
    python LLaVA_InitialJson.py ~/LLaVA/data_prep/Sketch2Code_og/data ~/LLaVA/data_prep/Sketch2Code_og
    prefer
    python data_prep/LLaVA_InitialJson.py...

  3. Adding a wiki.

  • A wiki with a step-by-step guide on how to start working with the repo would be useful. The idea is that people can refer to it and understand what the repo does; they should not need to understand how it works internally.
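To make point 1 concrete, here is a minimal sketch of how LLaVA_InitialJson.py could expose named arguments in the style of scripts/prepare_ui_gen_data.py. The flag names --data_dir and --destination_path, the build_initial_json helper, and the initial.json output filename are illustrative assumptions, not the script's actual interface.

```python
#!/usr/bin/env python3
"""Sketch: an argparse-style CLI for LLaVA_InitialJson.py (illustrative only)."""
import argparse
import json
from pathlib import Path


def build_initial_json(data_dir: Path) -> list[dict]:
    # Hypothetical helper: walk the image folder and build LLaVA-style
    # conversation records. The real logic lives in LLaVA_InitialJson.py;
    # this stub only shows the intended shape of the output.
    records = []
    for image_path in sorted(data_dir.glob("*.png")):
        records.append({
            "id": image_path.stem,
            "image": image_path.name,
            "conversations": [],  # filled in by the real script
        })
    return records


def main() -> None:
    parser = argparse.ArgumentParser(
        description="Build the initial LLaVA JSON from a Sketch2Code data folder."
    )
    # Named flags make the input and output apparent, mirroring
    # scripts/prepare_ui_gen_data.py (--csv_path / --destination_path).
    parser.add_argument("--data_dir", type=Path, required=True,
                        help="Folder with the raw images, e.g. data_prep/Sketch2Code_og/data")
    parser.add_argument("--destination_path", type=Path, required=True,
                        help="Folder where the generated JSON is written, e.g. data_prep/Sketch2Code_og")
    args = parser.parse_args()

    records = build_initial_json(args.data_dir)
    args.destination_path.mkdir(parents=True, exist_ok=True)
    out_file = args.destination_path / "initial.json"
    out_file.write_text(json.dumps(records, indent=2))
    print(f"Wrote {len(records)} records to {out_file}")


if __name__ == "__main__":
    main()
```

With an interface like this, the script could also be invoked from the repo root with relative paths, which addresses point 2, for example:
    python data_prep/LLaVA_InitialJson.py --data_dir data_prep/Sketch2Code_og/data --destination_path data_prep/Sketch2Code_og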
