Disclaimer: This script may breach the X Terms of Service you agreed to when creating an account. It is provided for educational and informational purposes only.
A Python utility for automating the extraction of usernames from X (formerly Twitter) follower links that contain only user IDs. This tool walks you through converting the follower.js file from X's Data Archive download into JSON (a file that lists all follower links but identifies users only by ID) and produces a clean list of the usernames those IDs belong to.
**This script does NOT use the X API, as its free tier only allows fetching 1 user ID every 24 hours. Instead, it uses a (headless) web browser to resolve usernames from links that contain only user IDs.**
The X ID2Username Converter processes a JSON file of links to X user profiles that contain only user IDs and extracts the actual usernames from those links. It uses Selenium to visit each link, handle any redirects, and extract the username from the resulting page.
- Automated Username Extraction: Visits X user ID links and extracts usernames automatically
- Real-time Saving: Writes usernames to output file as they're discovered to preserve progress
- Comprehensive Logging: Detailed logs with timestamps for troubleshooting
- Error Handling: Tracks and saves failed URLs for manual review
- Configurable: Settings managed via YAML configuration file
The script operates through the following process:
- Configuration Loading: Reads login credentials from a YAML file
- JSON Processing: Extracts user links from follower.json (converted from follower.js, exported from X Data Archive)
- Browser Automation: Uses Selenium to:
- Log in to your X account
- Visit each user link
- Handle redirects
- Extract the username from the final page
- Username Extraction: Employs multiple methods to find usernames:
- URL parameters (screen_name)
- URL path analysis
- Page content scanning (title, meta tags, DOM elements)
- Output Generation:
- Saves extracted usernames to followers.txt in real-time
- Records failed URLs in failed_urls.txt for manual review
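The username-extraction fallbacks listed above can be sketched as a small pure function. This is an illustrative outline only, assuming the final URL after redirects; the script's actual function names and reserved-path handling may differ, and the page-content scan (title, meta tags, DOM) is omitted since it needs a live browser:

```python
from urllib.parse import urlparse, parse_qs

# Path segments that are never usernames (illustrative, not exhaustive)
RESERVED = {"i", "intent", "home", "search", "settings", "login"}

def username_from_url(url):
    """Try the URL-based extraction methods in order; return None if
    only page-content scanning could resolve the username."""
    parsed = urlparse(url)
    # Method 1: a screen_name query parameter (intent-style links)
    qs = parse_qs(parsed.query)
    if "screen_name" in qs:
        return qs["screen_name"][0]
    # Method 2: first path segment of a resolved profile URL
    parts = [p for p in parsed.path.split("/") if p]
    if parts and parts[0] not in RESERVED:
        return parts[0]
    # Method 3 (page-content scanning) would run in the browser here
    return None

print(username_from_url("https://x.com/intent/user?screen_name=jack"))  # jack
print(username_from_url("https://x.com/jack"))                          # jack
```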
- Python 3.6 or higher
- Chrome browser
- ChromeDriver (compatible with your Chrome version)
- Selenium
- PyYAML
- Clone this repository or download the script files
- Install required Python packages:
```
pip install selenium pyyaml
```
- Ensure ChromeDriver is installed and available in your PATH or in the same directory as the script
Before running the tool, you need to download your X data archive:
- Log in to your X account
- Click on 'More' from the navigation bar
- Go to 'Settings and Privacy'
- Select 'Your Account'
- Click on 'Download an archive of your data'
- Click 'Request archive'
- Wait for X to process your request (this can take 24 hours or more)
- Once ready, X will send an email with a download link
After downloading your X data archive:
- Extract the .zip file contents
- Navigate to the `data` folder in the extracted archive
- Locate the `follower.js` file and copy it into this tool's directory (where `config.yml` and `main.py` are)
- Convert `follower.js` to JSON format:
  - Open `follower.js` in a text editor
  - Remove the prefix `window.YTD.follower.part0 =` from the first line (keeping the `[` bracket and everything after/below it)
  - Save the file as `follower.json` in the tool's directory
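If you prefer not to edit the file by hand, the manual steps above can be automated with a small helper. This is an optional sketch, not part of the tool; it assumes the prefix ends at the first `[`:

```python
import json

def convert_follower_js(js_path="follower.js", json_path="follower.json"):
    """Strip the 'window.YTD.follower.part0 =' assignment prefix and
    save the remainder as valid JSON. Returns the number of entries."""
    with open(js_path, encoding="utf-8") as f:
        text = f.read()
    # Everything from the first '[' onward is the JSON array
    data = json.loads(text[text.index("["):])
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(data, f, indent=2)
    return len(data)
```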
Copy or rename `config.yml.example` to `config.yml` in the same directory as the script, with the following structure:
```yaml
x_credentials:
  username: "name_here"       # Your X login username
  password: "pass_here"       # Your X login password
  account_name: "name_here"   # Optional, only used for log identification
```

Using a burner or alt account is recommended. Visiting thousands of profile links in a headless browser may trigger rate limits, verification prompts, or temporary freezes; any lasting negative effects are unknown.
- `main.py`: The main script
- `config.yml`: Configuration file with X credentials
- `follower.json`: JSON file containing follower information with `userLinks`
- Ensure you have the required files in place (main.py, config.yml, follower.json)
- Run the script:
```
python main.py
```
Output files:
- `followers.txt`: List of extracted usernames (one per line)
- `failed_urls.txt`: List of URLs where username extraction failed
- `follower_extractor_[timestamp].log`: Detailed log file
If you encounter issues:
- Check the log file for specific error messages
- Verify your Chrome and ChromeDriver versions match
- For login issues, try running in non-headless mode by changing `headless = False` in the `main()` function
- Review `failed_urls.txt` to manually process URLs that couldn't be processed automatically, or use the `retry_failed_urls.py` script to try them again. More info below
- Login Failures: X/Twitter frequently changes its login page structure. If login fails, the tool saves a screenshot named `login_error_[timestamp].png` which can help diagnose the issue.
- Rate Limiting: If X detects too many requests, try increasing the wait time between requests.
- ChromeDriver Compatibility: Ensure your ChromeDriver version matches your Chrome browser version.
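One simple way to "increase the wait time between requests" is a randomized delay between profile visits, so request timing is less uniform. The base and jitter values below are illustrative, not the script's actual settings:

```python
import random

def next_delay(base=3.0, jitter=2.0):
    """Return a randomized wait in seconds: base plus up to `jitter`
    extra, to make visit timing less likely to trip rate limits."""
    return base + random.uniform(0, jitter)

# Usage: call time.sleep(next_delay()) between driver.get() visits
```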
The script employs several techniques to efficiently extract usernames:
- Redirect Tracking: Monitors URL changes to ensure it's working with the final destination (e.g. `twitter.com` to `x.com`)
- Multiple Extraction Methods: Uses various techniques to extract usernames:
- URL parameter parsing
- URL path analysis
- DOM content scanning
- Error Recovery: Implements fallback mechanisms if primary extraction methods fail
After running through a large number of profile links, the account may hit a temporary rate limit, causing some username lookups to fail. Recovering usually requires logging into the account manually, which triggers a prompt for email or phone verification due to unusual activity.
When this happens, the failed links are saved to a file called failed_urls.txt.
You can retry these links later using the retry_failed_urls.py script. This script processes the links in failed_urls.txt, adds any successfully resolved usernames to followers.txt (or creates it if not present), and moves any still-failing links to a new file, failed_urls2.txt. These secondary failures are likely due to another temporary rate limit.
If you notice that links are consistently failing while the script is running, stop it and log into the account to trigger the verification prompt. Then, in the console logs, look for the last successfully processed link. Open failed_urls.txt, find that link, and delete it along with any links above it. Save the file, delete failed_urls2.txt, and then run retry_failed_urls.py again.
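The manual cleanup of `failed_urls.txt` described above (deleting the last successfully processed link and everything above it) can be expressed as a small list operation. The helper name is hypothetical and not part of the tool:

```python
def prune_processed(failed_urls, last_success):
    """Drop `last_success` and every URL before it, keeping only the
    links that still need to be retried. If the link isn't found,
    return the list unchanged."""
    if last_success in failed_urls:
        return failed_urls[failed_urls.index(last_success) + 1:]
    return list(failed_urls)

urls = ["https://x.com/i/user/1", "https://x.com/i/user/2", "https://x.com/i/user/3"]
print(prune_processed(urls, "https://x.com/i/user/2"))  # ['https://x.com/i/user/3']
```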
Feel free to fork this project and submit pull requests.
- After X failed URLs, stop the script and log a message indicating the account may be rate-limited or flagged for unusual activity. Inform the user that they may need to log into the account (sometimes in a new private/incognito browser window) to trigger email or phone verification. Add a config setting to define the number of allowed failed URLs, with `0` disabling this feature.
- For accounts with many thousands of followers, add logic to remove all the successful URLs from follower.json so the script doesn't run through them again.
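The proposed failure-threshold setting could be as simple as the check below. This is a sketch of planned behavior only, not something the script currently implements:

```python
def should_stop(consecutive_failures, max_failed):
    """Stop after `max_failed` consecutive failed URLs; a setting of
    0 disables the check entirely."""
    return max_failed > 0 and consecutive_failures >= max_failed

print(should_stop(5, 5))  # True  -> stop and warn about possible rate limiting
print(should_stop(5, 0))  # False -> feature disabled
```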
This project is licensed under the MIT License.
slapped together by rich