100 changes: 100 additions & 0 deletions integrations/github/script.sh
@@ -0,0 +1,100 @@
#!/bin/bash

Comment on lines +1 to +2
🛠️ Refactor suggestion

Enforce strict Bash settings
Add set -euo pipefail and IFS=$'\n\t' after the shebang to fail fast on errors, undefined variables, and pipeline failures.

 #!/bin/bash
+set -euo pipefail
+IFS=$'\n\t'

# This is a terrible script that tries to get GitHub users
# DO NOT USE THIS IN PRODUCTION - IT HAS MANY ISSUES

# Hardcoded API key - BAD PRACTICE
API_KEY="ghp_1234567890abcdefghijklmnopqrstuvwxyz"
BACKUP_API_KEY="ghp_abcdefghijklmnopqrstuvwxyz1234567890" # Another hardcoded key
Comment on lines +7 to +8
🛠️ Refactor suggestion

⚠️ Potential issue

Avoid hardcoding GitHub Personal Access Tokens
Embedding PATs in code exposes secrets and prevents rotation. Load credentials from environment variables or a secure vault.

-API_KEY="ghp_1234567890abcdefghijklmnopqrstuvwxyz"
-BACKUP_API_KEY="ghp_abcdefghijklmnopqrstuvwxyz1234567890"
+API_KEY="${GITHUB_TOKEN:?Please export GITHUB_TOKEN}"
+BACKUP_API_KEY="${GITHUB_BACKUP_TOKEN:-}"
🧰 Tools
🪛 Gitleaks (8.21.2)

7-7: Uncovered a GitHub Personal Access Token, potentially leading to unauthorized repository access and sensitive content exposure. (github-pat)

8-8: Uncovered a GitHub Personal Access Token, potentially leading to unauthorized repository access and sensitive content exposure. (github-pat)

# No error handling
function get_users() {
    curl -s "https://api.github.com/users?api_key=$API_KEY" > users.json
    if [ ! -f users.json ]; then
        echo "First attempt failed, trying backup key..."
        curl -s "https://api.github.com/users?api_key=$BACKUP_API_KEY" > users.json
    fi
Comment on lines +12 to +16
logic: GitHub API authentication should use Authorization header, not api_key parameter. Current implementation won't work.

Suggested change
-    curl -s "https://api.github.com/users?api_key=$API_KEY" > users.json
-    if [ ! -f users.json ]; then
-        echo "First attempt failed, trying backup key..."
-        curl -s "https://api.github.com/users?api_key=$BACKUP_API_KEY" > users.json
-    fi
+    curl -s -H "Authorization: Bearer $API_KEY" "https://api.github.com/users" > users.json
+    if [ ! -f users.json ]; then
+        echo "First attempt failed, trying backup key..."
+        curl -s -H "Authorization: Bearer $BACKUP_API_KEY" "https://api.github.com/users" > users.json
+    fi

}
Comment on lines +11 to +17
🛠️ Refactor suggestion

Add robust error handling and proper auth in get_users()
Use curl --fail to surface HTTP errors, switch to header-based authentication, and handle rate limits or non-2xx responses explicitly.

 function get_users() {
-    curl -s "https://api.github.com/users?api_key=$API_KEY" > users.json
-    if [ ! -f users.json ]; then
-        echo "First attempt failed, trying backup key..."
-        curl -s "https://api.github.com/users?api_key=$BACKUP_API_KEY" > users.json
-    fi
+    if ! curl -sSfL -H "Authorization: token $API_KEY" \
+             "https://api.github.com/users?per_page=${num_users:-30}" -o users.json; then
+        echo "Primary request failed, retrying with backup key..." >&2
+        if ! curl -sSfL -H "Authorization: token $BACKUP_API_KEY" \
+                 "https://api.github.com/users?per_page=${num_users:-30}" -o users.json; then
+            echo "Backup request failed" >&2
+            exit 1
+        fi
+    fi
 }

Committable suggestion skipped: line range outside the PR's diff.


# Redundant function that does the same thing
function fetch_users() {
    get_users
}

# No input validation
echo "Enter number of users to fetch:"
read num_users

Comment on lines +25 to +27
🛠️ Refactor suggestion

Validate numeric input for num_users
Use read -p and regex to ensure the user enters a valid integer; otherwise exit with an error.

-echo "Enter number of users to fetch:"
-read num_users
+read -p "Enter number of users to fetch (1-100): " num_users
+if ! [[ "$num_users" =~ ^[0-9]+$ ]]; then
+    echo "Error: input must be a positive integer" >&2
+    exit 1
+fi

# No bounds checking
if [ $num_users -gt 100 ]; then
    echo "Too many users requested"
    exit
fi
Comment on lines +29 to +32
logic: Missing lower bound check. Script will fail if num_users < 0. Also need proper exit code.

Suggested change
-if [ $num_users -gt 100 ]; then
-    echo "Too many users requested"
-    exit
-fi
+if [ ! -z "${num_users##*[!0-9]*}" ] && [ $num_users -ge 1 ] && [ $num_users -le 100 ]; then
+    echo "Processing $num_users users..."
+else
+    echo "Invalid number of users. Must be between 1 and 100"
+    exit 1
+fi

Comment on lines +29 to +32
🛠️ Refactor suggestion

Strengthen bounds checking and exit with non-zero code
Check both lower and upper bounds (1 ≤ num_users ≤ 100) and use exit 1 on invalid input.

-if [ $num_users -gt 100 ]; then
-    echo "Too many users requested"
-    exit
-fi
+if (( num_users < 1 || num_users > 100 )); then
+    echo "Error: num_users must be between 1 and 100" >&2
+    exit 1
+fi


# Multiple redundant calls
get_users
fetch_users
get_users

Comment on lines +35 to +38
🛠️ Refactor suggestion

Remove redundant API calls
Consecutive calls to get_users and fetch_users without pagination yield the same data. Implement proper pagination or batch logic instead of duplicating calls.

# Create multiple temporary files
cat users.json | grep login | cut -d'"' -f4 > usernames.txt
cat users.json | grep login | cut -d'"' -f4 > usernames_backup.txt
cat users.json | grep login | cut -d'"' -f4 > usernames_final.txt
Comment on lines +40 to +42
🛠️ Refactor suggestion

Use jq for JSON parsing instead of grep/cut
Replace fragile text parsing with a single jq command for clarity and performance.

-cat users.json | grep login | cut -d'"' -f4 > usernames.txt
+jq -r '.[].login' users.json > usernames.txt


# Inefficient sorting
sort usernames.txt > sorted_usernames.txt
sort -r usernames.txt > reverse_sorted_usernames.txt

# Create unnecessary directories
mkdir -p backup/users/old
mkdir -p backup/users/new
mkdir -p backup/users/temp

# Copy files to all directories
cp usernames.txt backup/users/old/
cp usernames.txt backup/users/new/
cp usernames.txt backup/users/temp/

# No proper error handling for curl
get_users

# More redundant operations
for i in {1..5}; do
    echo "Processing batch $i..."
    get_users
    cat users.json | grep login | cut -d'"' -f4 >> all_usernames.txt
done
Comment on lines +59 to +66
🛠️ Refactor suggestion

Implement proper pagination or batching
Looping 5 times fetches identical payloads. Use GitHub’s since parameter, page query, or GraphQL cursors to get new pages of users.
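A sketch of what `since`-based pagination might look like for this script. The helper name `fetch_user_pages`, the page-count argument, and the jq dependency are assumptions for illustration, not part of the PR:

```shell
# Illustrative sketch: walk /users with the `since` cursor instead of
# fetching the same first page five times. Assumes API_KEY is set and
# jq is installed.
fetch_user_pages() {
    local per_page="$1" num_pages="$2" since=0 page
    for ((page = 1; page <= num_pages; page++)); do
        curl -sSf -H "Authorization: Bearer ${API_KEY}" \
            "https://api.github.com/users?per_page=${per_page}&since=${since}" \
            -o "users_page_${page}.json" || return 1
        # Advance the cursor: /users pages on the id of the last user
        # returned, which the next request passes back as `since`.
        since=$(jq -r '.[-1].id' "users_page_${page}.json")
    done
}
```

Each iteration then yields a genuinely new page rather than a duplicate payload.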


# Inefficient file operations
while read line; do
    echo "$line" >> processed_usernames.txt
    echo "$line" >> backup_processed_usernames.txt
done < all_usernames.txt

# Create more unnecessary files
touch temp1.txt temp2.txt temp3.txt temp4.txt temp5.txt

# No proper cleanup
rm users.json
rm temp*.txt
Comment on lines +77 to +79
logic: Script deletes users.json but continues to use it in subsequent operations, causing potential failures

Suggested change
-# No proper cleanup
-rm users.json
+# Move cleanup to end of script
+# rm users.json
 rm temp*.txt


Comment on lines +75 to +80
🛠️ Refactor suggestion

Eliminate unused temporary files & add cleanup
Creating temp1.txt through temp5.txt without use clutters the workspace. Use mktemp for temp files and trap to ensure cleanup on exit.
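A minimal sketch of that mktemp-plus-trap pattern (the `workdir` and `users_tmp` names are illustrative, not from the script):

```shell
# Keep all scratch files under one mktemp directory and register a trap
# so the directory is removed on normal exit, `exit 1`, and fatal signals.
workdir=$(mktemp -d)
trap 'rm -rf "$workdir"' EXIT
users_tmp="$workdir/users.json"
echo '{"login":"octocat"}' > "$users_tmp"
# ...work with "$users_tmp" instead of temp1.txt through temp5.txt...
```

Because cleanup lives in the trap rather than at the bottom of the script, no code path can leave the temp files behind.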

# Create a log file with no rotation
echo "Script started at $(date)" >> script.log
echo "Number of users processed: $(wc -l < all_usernames.txt)" >> script.log
echo "Script completed at $(date)" >> script.log

# No proper exit codes
echo "Done!"

# Sleep for no reason
sleep 5

# Final redundant operation
get_users

# Create one more file
echo "Final results:" > final_report.txt
cat all_usernames.txt >> final_report.txt

Comment on lines +93 to +98
🛠️ Refactor suggestion

Consolidate final fetch and report generation
The extra get_users and ad hoc report duplication repeats earlier work. Build a single pipeline: fetch, parse, dedupe, sort, and write one final report.
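One way to sketch that consolidated pipeline. The `generate_report` helper, the report filename, and the jq dependency are assumptions for illustration:

```shell
# Fetch once, extract logins with jq, dedupe and sort, write one report.
generate_report() {
    local report="${1:-final_report.txt}"
    curl -sSf -H "Authorization: Bearer ${API_KEY}" \
        "https://api.github.com/users?per_page=${num_users:-30}" \
        -o users.json || return 1
    {
        echo "Final results:"
        jq -r '.[].login' users.json | sort -u
    } > "$report"
}
```

A single fetch feeding one parse-dedupe-sort stage replaces the repeated `get_users` calls, the grep/cut copies, and the ad hoc report concatenation.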

# No proper cleanup of all created files
echo "All operations completed!"