Merge branch 'v3' into main
commit 3511dbc88e

README.md (102 changes)
@@ -148,13 +148,12 @@ This file contains sensitive information. Never share or commit this file to ver
  - Replace with your LinkedIn account email address
- `password: [Your LinkedIn password]`
  - Replace with your LinkedIn account password
- `openai_api_key: [Your OpenAI API key]`
- `llm_api_key: [Your OpenAI or Ollama API key]`
  - Replace with your OpenAI API key for GPT integration
  - To obtain an API key, follow the tutorial at: https://medium.com/@lorenzozar/how-to-get-your-own-openai-api-key-f4d44e60c327
  - Note: You need to add credit to your OpenAI account to use the API. You can add credit by visiting the [OpenAI billing dashboard](https://platform.openai.com/account/billing).

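A quick way to confirm the keys above are spelled correctly is to load the secrets file yourself before running the bot. This is a minimal sketch, not part of the project: the file path and error messages are assumptions, loosely mirroring the `validate_secrets` logic changed in `main.py` by this commit.

```python
# Minimal sketch: load and sanity-check the secrets file.
# Path and key names are assumptions based on the examples in this commit.
from pathlib import Path
import yaml

def load_secrets(path: Path = Path("data_folder/secrets.yaml")) -> tuple:
    secrets = yaml.safe_load(path.read_text())
    for key in ("email", "password", "llm_api_key"):
        if not secrets.get(key):
            raise ValueError(f"'{key}' is missing or empty in {path}")
    return secrets["email"], str(secrets["password"]), secrets["llm_api_key"]
```
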
### 2. config.yaml

This file defines your job search parameters and bot behavior. Each section contains options that you can customize:

@@ -211,7 +210,22 @@ This file defines your job search parameters and bot behavior. Each section cont
- Sales
- Marketing
```

#### 2.1 config.yaml - Customize LLM model endpoint

- `llm_model_type`:
  - Choose the model type; supported: openai / ollama / claude
- `llm_model`:
  - Choose the LLM model, currently supported:
    - openai: gpt-4o
    - ollama: llama2, mistral:v0.3
    - claude: any model
- `llm_api_url`:
  - URL of the API endpoint for the LLM model
    - openai: https://api.pawan.krd/cosmosrp/v1
    - ollama: http://127.0.0.1:11434/
    - claude: https://api.anthropic.com/v1
- Note: To run Ollama locally, follow the guidelines here: [Guide to Ollama deployment](https://github.com/ollama/ollama)

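Taken together, these three keys decide which LangChain chat model the bot instantiates. The sketch below condenses the `AIAdapter` added to `src/gpt.py` in this commit; it is illustrative rather than the exact class.

```python
# Minimal sketch of how the three config keys above select a chat model,
# mirroring the AIAdapter added to src/gpt.py in this commit.
def create_model(config: dict, api_key: str):
    llm_model_type = config["llm_model_type"]   # openai / ollama / claude
    llm_model = config["llm_model"]             # e.g. gpt-4o, llama2, mistral:v0.3
    llm_api_url = config["llm_api_url"]         # endpoint for the chosen provider

    if llm_model_type == "openai":
        from langchain_openai import ChatOpenAI
        return ChatOpenAI(model_name=llm_model, openai_api_key=api_key,
                          temperature=0.4, base_url=llm_api_url)
    elif llm_model_type == "claude":
        from langchain_anthropic import ChatAnthropic
        return ChatAnthropic(model=llm_model, api_key=api_key,
                             temperature=0.4, base_url=llm_api_url)
    elif llm_model_type == "ollama":
        from langchain_ollama import ChatOllama
        return ChatOllama(model=llm_model, base_url=llm_api_url)
    raise ValueError(f"Unsupported model type: {llm_model_type}")
```
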
### 3. plain_text_resume.yaml

This file contains your resume information in a structured format. Fill it out with your personal details, education, work experience, and skills. This information is used to auto-fill application forms and generate customized resumes.

@@ -522,19 +536,83 @@ Using this folder as a guide can be particularly helpful for:
python main.py --resume /path/to/your/resume.pdf
```

## Documentation

TODO ):

### Troubleshooting Common Issues

#### 1. OpenAI API Rate Limit Errors

**Error Message:**

openai.RateLimitError: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}}

**Solution:**

- Check your OpenAI API billing settings at https://platform.openai.com/account/billing
- Ensure you have added a valid payment method to your OpenAI account
- Note that a ChatGPT Plus subscription is separate from API access
- If you've recently added funds or upgraded, wait 12-24 hours for the changes to take effect
- The free tier is limited to 3 requests per minute; spending at least $5 on API usage raises the limit (a retry sketch for intermittent 429s follows below)

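If billing is in order but you still see occasional 429s, retrying with a pause usually gets past them. A minimal sketch, assuming the `openai` >= 1.0 client that raises `openai.RateLimitError`; the delay values are arbitrary placeholders, not project defaults.

```python
# Minimal sketch: retry a callable after a 429, backing off a little longer each time.
import time
import openai

def invoke_with_retry(call, retries: int = 3):
    for attempt in range(retries):
        try:
            return call()
        except openai.RateLimitError:
            if attempt == retries - 1:
                raise  # still rate-limited after the last attempt
            time.sleep(10 * (attempt + 1))  # wait 10s, then 20s, ...
```
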
#### 2. LinkedIn Easy Apply Button Not Found

**Error Message:**

Exception: No clickable 'Easy Apply' button found

**Solution:**

- Ensure that you're logged into LinkedIn properly
- Check whether the job listings you're targeting actually offer the "Easy Apply" option
- Verify that the search parameters in your `config.yaml` are correct and return jobs with an "Easy Apply" button
- Try increasing the wait time for page loading so all elements are present before the script looks for the button (see the explicit-wait sketch below)

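One way to give the page more time is an explicit wait rather than a fixed sleep. A minimal sketch; the XPath is an assumption, since LinkedIn's markup changes often.

```python
# Minimal sketch: wait for the Easy Apply button instead of sleeping a fixed time.
# The XPath is an assumption; adjust it to the current LinkedIn markup.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def find_easy_apply_button(driver, timeout: int = 15):
    return WebDriverWait(driver, timeout).until(
        EC.element_to_be_clickable(
            (By.XPATH, "//button[contains(@aria-label, 'Easy Apply')]")
        )
    )
```
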
#### 3. Incorrect Information in Job Applications

**Issue:** The bot provides inaccurate data for experience, CTC, and notice period

**Solution:**

- Make the prompts more specific about your professional experience
- Add fields in `config.yaml` for current CTC, expected CTC, and notice period (a hypothetical sketch follows below)
- Modify the bot logic to read these new config fields

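A hypothetical sketch of the last two points: none of the field names below exist in the project's `config.yaml` today; they only illustrate how extra compensation fields could be fed into the prompt context.

```python
# Hypothetical sketch: pass compensation details from config into the prompt,
# so the model stops guessing. These keys are placeholders you would add yourself.
def build_compensation_context(config: dict) -> str:
    return (
        f"Current CTC: {config.get('current_ctc', 'not specified')}\n"
        f"Expected CTC: {config.get('expected_ctc', 'not specified')}\n"
        f"Notice period: {config.get('notice_period', 'not specified')}\n"
    )
```
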
#### 4. YAML Configuration Errors

**Error Message:**

yaml.scanner.ScannerError: while scanning a simple key

**Solution:**

- Copy the example `config.yaml` and modify it gradually
- Ensure proper YAML indentation and spacing
- Use a YAML validator tool (see the sketch below for a quick local check)
- Avoid unnecessary special characters or quotes

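For a quick local check before running the bot, this minimal sketch reports where PyYAML stops parsing; the file path is an assumption.

```python
# Minimal sketch: report where a YAML file breaks before running the bot.
# Point the path at your own config file.
import yaml

def check_yaml(path: str = "data_folder/config.yaml") -> None:
    try:
        with open(path) as fh:
            yaml.safe_load(fh)
        print(f"{path}: OK")
    except yaml.YAMLError as exc:
        mark = getattr(exc, "problem_mark", None)
        if mark is not None:
            print(f"{path}: error at line {mark.line + 1}, column {mark.column + 1}")
        print(exc)
```
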
#### 5. Bot Logs In But Doesn't Apply to Jobs

**Issue:** The bot searches for jobs but keeps scrolling without applying

**Solution:**

- Check for security checks or CAPTCHAs blocking the session
- Verify the job search parameters in `config.yaml`
- Ensure your LinkedIn profile meets the requirements of the jobs you target
- Review the console output for error messages (a logging sketch follows below)

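When the console output alone is not enough, turning on verbose logging and writing it to a file makes it easier to see why jobs are skipped. A minimal sketch; the file name and format are assumptions, and the `logging.basicConfig` call mirrors the one used in this commit's LinkedIn API module.

```python
# Minimal sketch: send verbose output to a log file for later inspection.
# File name and format are assumptions, not project defaults.
import logging

logging.basicConfig(
    level=logging.DEBUG,
    filename="bot_debug.log",
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
```
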
### General Troubleshooting Tips

- Use the latest version of the script
- Verify all dependencies are installed and updated
- Check internet connection stability
- Use VPNs cautiously to avoid triggering LinkedIn security
- Clear browser cache and cookies if issues persist

For further assistance, please create an issue on the [GitHub repository](https://github.com/feder-cr/LinkedIn_AIHawk_automatic_job_application/issues) with detailed information about your problem, including error messages and your configuration (with sensitive information removed).

### Additional Resources

- [Video Tutorial: How to set up LinkedIn_AIHawk](https://youtu.be/gdW9wogHEUM)
- [OpenAI API Documentation](https://platform.openai.com/docs/)
- [LinkedIn Developer Documentation](https://developer.linkedin.com/)
- [LangChain Developer Documentation](https://python.langchain.com/v0.2/docs/integrations/components/)

## Troubleshooting

- **Carefully read logs and output:** Most errors are reported verbosely; watch the output and try to find the root cause.
- **If nothing works for an unknown reason:** Use a tested OS, reboot and/or update the OS, use a fresh clean venv, and try updating Python to the tested version.
- **ChromeDriver issues:** Ensure ChromeDriver is compatible with your installed Chrome version (see the version-check sketch below).
- **Missing files:** Verify that all necessary files are present in the data folder.
- **Invalid YAML:** Check your YAML files for syntax errors. Try an external YAML validator, e.g. https://www.yamllint.com/
- **OpenAI endpoint issues:** Check for possible limits or blocking on their side.

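For the ChromeDriver point, a minimal sketch that prints the browser and driver versions Selenium is actually pairing; the capability keys are those Chrome typically reports through Selenium 4, so treat them as an assumption if your setup differs.

```python
# Minimal sketch: print the browser and driver versions Selenium is using,
# so a Chrome/ChromeDriver mismatch is easy to spot.
from selenium import webdriver

driver = webdriver.Chrome()
caps = driver.capabilities
print("Chrome version:      ", caps.get("browserVersion"))
print("ChromeDriver version:", caps.get("chrome", {}).get("chromedriverVersion"))
driver.quit()
```
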
If you encounter any issues, you can open an issue on [GitHub](https://github.com/feder-cr/linkedIn_auto_jobs_applier_with_AI/issues).

Please add as much detail as possible to the subject and description. If you are requesting a new feature, please say so explicitly.

I'll be more than happy to assist you!

@@ -39,4 +39,8 @@ companyBlacklist:

titleBlacklist:
- word1
- word2
- word2

llm_model_type: openai
llm_model: gpt-4o
llm_api_url: https://api.pawan.krd/cosmosrp/v1
@@ -1,3 +1,3 @@
email: myemaillinkedin@gmail.com
password: ImpossiblePassowrd10
openai_api_key: sk-11KRr4uuTwpRGfeRTfj1T9BlbkFJjP8QTrswHU1yGruru2FR
llm_api_key: 'sk-11KRr4uuTwpRGfeRTfj1T9BlbkFJjP8QTrswHU1yGruru2FR'
@@ -37,3 +37,7 @@ companyBlacklist:
- Crossover

titleBlacklist:

llm_model_type: openai
llm_model: 'gpt-4o'
llm_api_url: 'https://api.pawan.krd/cosmosrp/v1'
@@ -1,3 +1,3 @@
email: myemaillinkedin@gmail.com
password: ImpossiblePassowrd10
openai_api_key: sk-11KRr4uuTwpRGfeRTfj1T9BlbkFJjP8QTrswHU1yGruru2FR
llm_api_key: 'sk-11KRr4uuTwpRGfeRTfj1T9BlbkFJjP8QTrswHU1yGruru2FR'

main.py (17 changes)

@@ -101,7 +101,7 @@ class ConfigValidator:
    @staticmethod
    def validate_secrets(secrets_yaml_path: Path) -> tuple:
        secrets = ConfigValidator.validate_yaml_file(secrets_yaml_path)
        mandatory_secrets = ['email', 'password', 'openai_api_key']
        mandatory_secrets = ['email', 'password']

        for secret in mandatory_secrets:
            if secret not in secrets:
@@ -111,10 +111,7 @@ class ConfigValidator:
            raise ConfigError(f"Invalid email format in secrets file {secrets_yaml_path}.")
        if not secrets['password']:
            raise ConfigError(f"Password cannot be empty in secrets file {secrets_yaml_path}.")
        if not secrets['openai_api_key']:
            raise ConfigError(f"OpenAI API key cannot be empty in secrets file {secrets_yaml_path}.")

        return secrets['email'], str(secrets['password']), secrets['openai_api_key']
        return secrets['email'], str(secrets['password']), secrets['llm_api_key']

class FileManager:
    @staticmethod
@@ -158,14 +155,14 @@ def init_browser() -> webdriver.Chrome:
    except Exception as e:
        raise RuntimeError(f"Failed to initialize browser: {str(e)}")

def create_and_run_bot(email: str, password: str, parameters: dict, openai_api_key: str):
def create_and_run_bot(email, password, parameters, llm_api_key):
    try:
        style_manager = StyleManager()
        resume_generator = ResumeGenerator()
        with open(parameters['uploads']['plainTextResume'], "r") as file:
            plain_text_resume = file.read()
        resume_object = Resume(plain_text_resume)
        resume_generator_manager = FacadeManager(openai_api_key, style_manager, resume_generator, resume_object, Path("data_folder/output"))
        resume_generator_manager = FacadeManager(llm_api_key, style_manager, resume_generator, resume_object, Path("data_folder/output"))
        os.system('cls' if os.name == 'nt' else 'clear')
        resume_generator_manager.choose_style()
        os.system('cls' if os.name == 'nt' else 'clear')
@@ -175,7 +172,7 @@ def create_and_run_bot(email: str, password: str, parameters: dict, openai_api_k
        browser = init_browser()
        login_component = LinkedInAuthenticator(browser)
        apply_component = LinkedInJobManager(browser)
        gpt_answerer_component = GPTAnswerer(openai_api_key)
        gpt_answerer_component = GPTAnswerer(parameters, llm_api_key)
        bot = LinkedInBotFacade(login_component, apply_component)
        bot.set_secrets(email, password)
        bot.set_job_application_profile_and_resume(job_application_profile_object, resume_object)
@@ -197,12 +194,12 @@ def main(resume: Path = None):
        secrets_file, config_file, plain_text_resume_file, output_folder = FileManager.validate_data_folder(data_folder)

        parameters = ConfigValidator.validate_config(config_file)
        email, password, openai_api_key = ConfigValidator.validate_secrets(secrets_file)
        email, password, llm_api_key = ConfigValidator.validate_secrets(secrets_file)

        parameters['uploads'] = FileManager.file_paths_to_dict(resume, plain_text_resume_file)
        parameters['outputFileDirectory'] = output_folder

        create_and_run_bot(email, password, parameters, openai_api_key)
        create_and_run_bot(email, password, parameters, llm_api_key)
    except ConfigError as ce:
        print(f"Configuration error: {str(ce)}")
        print("Refer to the configuration guide for troubleshooting: https://github.com/feder-cr/LinkedIn_AIHawk_automatic_job_application/blob/main/readme.md#configuration")

src/gpt.py (72 changes)

@@ -3,7 +3,8 @@ import os
import re
import textwrap
from datetime import datetime
from typing import Dict, List
from abc import ABC, abstractmethod
from typing import Dict, List, Union
from pathlib import Path
from dotenv import load_dotenv
from langchain_core.messages.ai import AIMessage
@@ -17,10 +18,66 @@ import src.strings as strings

load_dotenv()

class AIModel(ABC):
    @abstractmethod
    def invoke(self, prompt: str) -> str:
        pass

class OpenAIModel(AIModel):
    def __init__(self, api_key: str, llm_model: str, llm_api_url: str):
        from langchain_openai import ChatOpenAI
        self.model = ChatOpenAI(model_name=llm_model, openai_api_key=api_key,
                                temperature=0.4, base_url=llm_api_url)

    def invoke(self, prompt: str) -> str:
        print("invoke in openai")
        response = self.model.invoke(prompt)
        return response

class ClaudeModel(AIModel):
    def __init__(self, api_key: str, llm_model: str, llm_api_url: str):
        from langchain_anthropic import ChatAnthropic
        self.model = ChatAnthropic(model=llm_model, api_key=api_key,
                                   temperature=0.4, base_url=llm_api_url)

    def invoke(self, prompt: str) -> str:
        response = self.model.invoke(prompt)
        return response

class OllamaModel(AIModel):
    def __init__(self, api_key: str, llm_model: str, llm_api_url: str):
        from langchain_ollama import ChatOllama
        self.model = ChatOllama(model=llm_model, base_url=llm_api_url)

    def invoke(self, prompt: str) -> str:
        response = self.model.invoke(prompt)
        return response

class AIAdapter:
    def __init__(self, config: dict, api_key: str):
        self.model = self._create_model(config, api_key)

    def _create_model(self, config: dict, api_key: str) -> AIModel:
        llm_model_type = config['llm_model_type']
        llm_model = config['llm_model']
        llm_api_url = config['llm_api_url']
        print('Using {0} with {1} from {2}'.format(llm_model_type, llm_model, llm_api_url))

        if llm_model_type == "openai":
            return OpenAIModel(api_key, llm_model, llm_api_url)
        elif llm_model_type == "claude":
            return ClaudeModel(api_key, llm_model, llm_api_url)
        elif llm_model_type == "ollama":
            return OllamaModel(api_key, llm_model, llm_api_url)
        else:
            raise ValueError(f"Unsupported model type: {llm_model_type}")

    def invoke(self, prompt: str) -> str:
        return self.model.invoke(prompt)

class LLMLogger:

    def __init__(self, llm: ChatOpenAI):
    def __init__(self, llm: Union[OpenAIModel, OllamaModel, ClaudeModel]):
        self.llm = llm

    @staticmethod
@@ -78,12 +135,12 @@ class LLMLogger:

class LoggerChatModel:

    def __init__(self, llm: ChatOpenAI):
    def __init__(self, llm: Union[OpenAIModel, OllamaModel, ClaudeModel]):
        self.llm = llm

    def __call__(self, messages: List[Dict[str, str]]) -> str:
        # Call the LLM with the provided messages and log the response.
        reply = self.llm(messages)
        reply = self.llm.invoke(messages)
        parsed_reply = self.parse_llmresult(reply)
        LLMLogger.log_request(prompts=messages, parsed_reply=parsed_reply)
        return reply
@@ -113,10 +170,9 @@ class LoggerChatModel:


class GPTAnswerer:
    def __init__(self, openai_api_key):
        self.llm_cheap = LoggerChatModel(
            ChatOpenAI(model_name="gpt-4o-mini", openai_api_key=openai_api_key, temperature=0.4)
        )
    def __init__(self, config, llm_api_key):
        self.ai_adapter = AIAdapter(config, llm_api_key)
        self.llm_cheap = LoggerChatModel(self.ai_adapter)

    @property
    def job_description(self):
        return self.job.description
@@ -1,7 +1,13 @@
from typing import Dict, List
from linkedin_api import Linkedin
from typing import Optional, Union, Literal
from urllib.parse import urlencode
from urllib.parse import quote, urlencode
import logging
import json

# set log to all debug
logging.basicConfig(level=logging.INFO)


class LinkedInEvolvedAPI(Linkedin):
    def __init__(self, username, password):
@@ -106,7 +112,7 @@ class LinkedInEvolvedAPI(Linkedin):
        if remote:
            query["selectedFilters"]["workplaceType"] = f"List({','.join(remote)})"
        if easy_apply:
            query["selectedFilters"]["easyApply"] = "List(true)"
            query["selectedFilters"]["applyWithLinkedin"] = "List(true)"

        query["selectedFilters"]["timePostedRange"] = f"List(r{listed_at})"
        query["spellCorrectionEnabled"] = "true"
@@ -160,9 +166,103 @@ class LinkedInEvolvedAPI(Linkedin):
            self.logger.debug(f"results grew to {len(results)}")

        return results

    def get_fields_for_easy_apply(self, job_id: str) -> List[Dict]:
        """Get fields needed for easy apply jobs.

        :param job_id: Job ID
        :type job_id: str
        :return: Fields
        :rtype: dict
        """
        cookies = self.client.session.cookies.get_dict()
        cookie_str = "; ".join([f"{k}={v}" for k, v in cookies.items()])

        headers: Dict[str, str] = self._headers()
        headers["Accept"] = "application/vnd.linkedin.normalized+json+2.1"
        headers["csrf-token"] = cookies["JSESSIONID"].replace('"', "")
        headers["Cookie"] = cookie_str
        headers["Connection"] = "keep-alive"

        default_params = {
            "decorationId": "com.linkedin.voyager.dash.deco.jobs.OnsiteApplyApplication-67",
            "jobPostingUrn": f"urn:li:fsd_jobPosting:{job_id}",
            "q": "jobPosting",
        }

        default_params = urlencode(default_params)
        res = self._fetch(
            f"/voyagerJobsDashOnsiteApplyApplication?{default_params}",
            headers=headers,
            cookies=cookies,
        )

        match res.status_code:
            case 200:
                pass
            case 409:
                self.logger.error("Failed to fetch fields for easy apply job because already applied to this job!")
                return []
            case _:
                self.logger.error("Failed to fetch fields for easy apply job")
                return []

        try:
            data = res.json()
        except ValueError:
            self.logger.error("Failed to parse JSON response")
            return []

        form_components = []

        for item in data.get("included", []):
            if 'formComponent' in item:
                urn = item['urn']
                try:
                    title = item['title']['text']
                except TypeError:
                    title = urn

                form_component_type = list(item['formComponent'].keys())[0]
                form_component_details = item['formComponent'][form_component_type]

                component_info = {
                    'title': title,
                    'urn': urn,
                    'formComponentType': form_component_type,
                }

                if 'textSelectableOptions' in form_component_details:
                    options = [
                        opt['optionText']['text'] for opt in form_component_details['textSelectableOptions']
                    ]
                    component_info['selectableOptions'] = options
                elif 'selectableOptions' in form_component_details:
                    options = [
                        opt['textSelectableOption']['optionText']['text']
                        for opt in form_component_details['selectableOptions']
                    ]
                    component_info['selectableOptions'] = options

                form_components.append(component_info)

        return form_components

## EXAMPLE USAGE
if __name__ == "__main__":
    api: LinkedInEvolvedAPI = LinkedInEvolvedAPI(username="", password="")
    jobs = api.search_jobs(keywords="Frontend Developer", location_name="Italia", limit=5, easy_apply=True, offset=1)
    for job in jobs:
        job_id: str = job["job_id"]

        fields = api.get_fields_for_easy_apply(job_id)
        for field in fields:
            print(field)