7 changes: 7 additions & 0 deletions LICENSE.md
@@ -0,0 +1,7 @@
MIT License

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
94 changes: 94 additions & 0 deletions README.md
@@ -0,0 +1,94 @@
# DonnaSpecter

Welcome to the world of DonnaSpecter, an open-source software (OSS) project that's as sharp, adaptable, and indispensable as Donna Paulsen herself. A tribute to the legendary secretary from the TV show "Suits", this AI-powered personal assistant is ready to handle your daily tasks with the same flair and proficiency.

## Features

### Email Handling

Just as Donna manages the communications for Pearson Hardman, the `email_handler` module takes charge of your incoming and outgoing emails, ensuring not a single important message slips through the cracks.

### Task Scheduling

Ever wonder how Donna keeps track of all those appointments and meetings? With the `scheduler` module, DonnaSpecter has an impeccable sense of timing, ensuring you never miss a task or deadline.

### AI Modeling

Donna always seems to know exactly what's needed, doesn't she? The `ai_model` directory houses the intelligence behind DonnaSpecter, enabling her to anticipate your needs and offer solutions.

### Frontend and Backend

Every law firm needs its front-of-house and its backroom experts. The `frontend` and `backend` directories contain the code that keeps DonnaSpecter running smoothly, from the interface you see to the data processing happening behind the scenes.

### Security

In the world of legal drama, confidentiality is paramount. Our `security` module is designed to protect your data with as much diligence as Donna protects the secrets of Pearson Hardman.

### Database Management

Every case detail, every clause, every precedent - Donna remembers them all. The `database` module is the digital equivalent, storing and managing your data with precision.

### Microservices Architecture

Just as a law firm relies on the expertise of various departments, DonnaSpecter is built on a microservices architecture for scalable, reliable, and independent deployment of services.

### DevOps

Pearson Hardman wouldn't be a top law firm without its streamlined processes. The `cicd`, `kubernetes`, and `docker` directories reflect our commitment to efficiency and modern development practices.

## Getting Started

Ready to bring the efficiency of Pearson Hardman to your daily life? Here's how to start:

1. Clone the DonnaSpecter repository - no legal paperwork required. You can find the repository at [this link](https://github.com/shadowaxe99/DonnaSpecter).
2. Enter the `src` directory, the heart of our operation.
3. Run the `main.py` script to wake DonnaSpecter and get started with your new personal assistant.

Ensure you have set up the necessary environment variables as specified in `shared_dependencies.md` and that your system is equipped with all the necessary dependencies.

## Contributing

Just as Mike Ross found his place at Pearson Hardman, we welcome new contributors to DonnaSpecter. Check out `CONTRIBUTING.md` for your orientation.

## License

DonnaSpecter operates under the MIT License. For the legalities, see `LICENSE.md`.

## Reporting Issues

Just like Harvey Specter, we believe in taking matters into our own hands. If you encounter an issue, don't wait around - "fix it yourself." However, if you believe that the issue may affect other users or require a more substantial fix, don't hesitate to raise an issue. This way, we can all contribute to improving DonnaSpecter and making it an even more effective assistant. After all, we're a team, and "that's how we win."

## How to Use - A Guide for the Mikes of the World

Ever feel like a fraud in a world of Harveys? Don't worry, Mike. We've got your back. Here's a simple guide to using DonnaSpecter:


### Step 1: Get the Goods
First, you need to get DonnaSpecter onto your computer. This is called "cloning" the repository. Don't worry, it's perfectly legal. In your terminal, navigate to the directory where you want to put DonnaSpecter, and enter:

```bash
git clone https://github.com/shadowaxe99/DonnaSpecter.git
```

Now you've got your own copy of DonnaSpecter!

### Step 2: Enter the World
Navigate into the heart of the operation, the `src` directory. Just type:

```bash
cd DonnaSpecter/src
```

You're in.

### Step 3: Wake Donna Up
Start the program by running the `main.py` script. This is like waking Donna up in the morning. Type:

```bash
python main.py
```

DonnaSpecter should now be running and ready to assist you.

### Step 4: Ask for Help
DonnaSpecter has a lot of functionalities. If you're not sure where to start, just ask for help. Donna is here to assist you, and she's got a whole lot of tricks up her sleeve.

Remember, as a wise man once said, "When you are backed against the wall, break the goddamn thing down." So if you hit an obstacle on your journey with DonnaSpecter, don't hesitate to reach out and report an issue - we're in this together, and we'll break down those walls as a team. Don't be afraid to dive in and learn as you go. In the immortal words of Harvey Specter (and surely McKay would agree), "the only time success comes before work is in the dictionary." DonnaSpecter is here to make that work more manageable.

This guide assumes you know your way around a terminal and have Python installed. If not, a quick search for beginner resources will get you started.

"Life is this, I like this." - Harvey Specter, and hopefully you after using DonnaSpecter. Enjoy your journey with your new AI-powered assistant. It's time to suit up and get to work!
33 changes: 33 additions & 0 deletions availability_analysis.py
@@ -0,0 +1,33 @@
import datetime
from ai_assistant.scheduler import schedule

def check_availability(user_profile, start_time, end_time):
    """
    Check whether the user is free for the whole interval
    [start_time, end_time).
    """
    user_schedule = schedule[user_profile]
    for event in user_schedule:
        # Two intervals overlap exactly when each starts before the other
        # ends. (The original test missed events lying wholly inside the
        # requested window.)
        if event['start_time'] < end_time and start_time < event['end_time']:
            return False
    return True

def find_free_slots(user_profile, duration, start_date=None, end_date=None):
    """
    Find free slots of at least `duration` (a timedelta) between
    start_date and end_date.
    """
    # Defaults are computed at call time: default-argument expressions are
    # evaluated once at import, so the original datetime.now() defaults
    # went stale.
    if start_date is None:
        start_date = datetime.datetime.now()
    if end_date is None:
        end_date = start_date + datetime.timedelta(days=7)
    free_slots = []
    current_time = start_date
    while current_time + duration <= end_date:
        if check_availability(user_profile, current_time, current_time + duration):
            free_slots.append((current_time, current_time + duration))
        current_time += duration
    return free_slots

def suggest_times(user_profile, duration, num_suggestions=5):
    """
    Suggest up to num_suggestions free slots of at least `duration`.
    """
    return find_free_slots(user_profile, duration)[:num_suggestions]
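The availability check reduces to the standard interval-intersection test: two intervals overlap exactly when each starts before the other ends. A minimal, self-contained sketch with a hypothetical one-event schedule (no `ai_assistant` import):

```python
import datetime

def overlaps(a_start, a_end, b_start, b_end):
    # Two half-open intervals intersect iff each starts before the other ends.
    return a_start < b_end and b_start < a_end

day = datetime.datetime(2024, 1, 1)
event = (day.replace(hour=9), day.replace(hour=10))  # hypothetical 09:00-10:00 meeting

# 09:30-10:30 collides with the meeting; 10:00-11:00 does not.
print(overlaps(day.replace(hour=9, minute=30), day.replace(hour=10, minute=30), *event))  # True
print(overlaps(day.replace(hour=10), day.replace(hour=11), *event))  # False
```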
37 changes: 37 additions & 0 deletions cognitive_load_balancing.py
@@ -0,0 +1,37 @@
import datetime
from ai_assistant.scheduler import schedule
from ai_assistant.task_automation import task_list

def balanceLoad(user_profile, meeting_data, task_list):
    """
    Balance cognitive load by distributing tasks and meetings evenly
    across the next seven days.
    """
    # Get the total number of tasks and meetings
    total_items = len(task_list) + len(meeting_data)

    # Calculate the average load per day
    average_load = total_items / 7

    # Distribute the tasks and meetings evenly across the week
    for i in range(7):
        day = datetime.datetime.now() + datetime.timedelta(days=i)
        day_load = 0
        # The emptiness check prevents the infinite loop the original fell
        # into once both lists were drained before day_load reached the
        # average.
        while day_load < average_load and (task_list or meeting_data):
            if task_list:
                schedule(user_profile, task_list.pop(0), day)
                day_load += 1
            if meeting_data:
                schedule(user_profile, meeting_data.pop(0), day)
                day_load += 1

    return user_profile

def updateLoadBalancing(user_profile, meeting_data, task_list):
    """
    Re-balance the schedule when a new task or meeting is added.
    """
    balanceLoad(user_profile, meeting_data, task_list)

    return user_profile
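The even-distribution idea can be sketched without the scheduler: a round-robin split keeps every day's load within one item of every other day's. A self-contained toy version with hypothetical items:

```python
def distribute(items, days=7):
    """Round-robin items across `days` buckets so no day is overloaded."""
    buckets = [[] for _ in range(days)]
    for i, item in enumerate(items):
        buckets[i % days].append(item)
    return buckets

# Ten hypothetical items over a seven-day week: three days get 2, four get 1.
buckets = distribute(list(range(10)), days=7)
print([len(b) for b in buckets])  # [2, 2, 2, 1, 1, 1, 1]
```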
36 changes: 36 additions & 0 deletions content_curation.py
@@ -0,0 +1,36 @@
import os
import json
from ai_assistant.shared_dependencies import user_profile, content_data

class ContentCuration:
    def __init__(self, user_profile, content_data):
        self.user_profile = user_profile
        self.content_data = content_data

    def curate_content(self):
        curated_content = []
        interests = set(self.user_profile['interests'])
        for content in self.content_data:
            # Keep content that shares at least one tag with the user's
            # interests. (The original tested the whole interests list for
            # membership in the tags list, which never matches a single tag.)
            if interests & set(content['tags']):
                curated_content.append(content)
        return curated_content

    def save_curated_content(self, curated_content):
        with open('curated_content.json', 'w') as json_file:
            json.dump(curated_content, json_file)

    def load_curated_content(self):
        if os.path.exists('curated_content.json'):
            with open('curated_content.json') as json_file:
                return json.load(json_file)
        return []

if __name__ == "__main__":
    content_curation = ContentCuration(user_profile, content_data)
    curated_content = content_curation.curate_content()
    content_curation.save_curated_content(curated_content)
    loaded_content = content_curation.load_curated_content()
    print(loaded_content)
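The tag-matching filter can be illustrated standalone. A minimal sketch with hypothetical profile and content records (not the module's real data shapes):

```python
user_profile = {'interests': ['law', 'chess']}          # hypothetical profile
content_data = [
    {'title': 'Closing arguments 101', 'tags': ['law']},
    {'title': 'Souffle basics', 'tags': ['cooking']},
]

# Keep items whose tags intersect the user's interests.
curated = [c for c in content_data
           if set(user_profile['interests']) & set(c['tags'])]
print([c['title'] for c in curated])  # ['Closing arguments 101']
```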
41 changes: 41 additions & 0 deletions contextual_understanding.py
@@ -0,0 +1,41 @@
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

# Importing shared dependencies
from shared_dependencies import user_profile, meeting_data

# Requires the NLTK data packages: punkt, stopwords,
# averaged_perceptron_tagger, maxent_ne_chunker, and words
# (install once with nltk.download(...)).

class ContextUnderstanding:
    def __init__(self):
        self.stop_words = set(stopwords.words('english'))

    def tokenize(self, text):
        return word_tokenize(text)

    def remove_stopwords(self, tokenized_text):
        return [word for word in tokenized_text if word.lower() not in self.stop_words]

    def understand_context(self, text):
        tokenized = self.remove_stopwords(self.tokenize(text))

        tagged = nltk.pos_tag(tokenized)

        # ne_chunk returns a tree of named entities. The original called
        # .draw() here, which pops up a GUI window on every call; that is
        # unsuitable for a background assistant, so it has been removed.
        named_entities = nltk.ne_chunk(tagged)

        return named_entities

context_understanding = ContextUnderstanding()

def update_context():
    for meeting in meeting_data:
        context = context_understanding.understand_context(meeting['description'])
        meeting['context'] = context

def get_context(user_id):
    user_meetings = [meeting for meeting in meeting_data if meeting['user_id'] == user_id]
    return [meeting['context'] for meeting in user_meetings]
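The stopword-filtering step can be shown without NLTK's downloaded corpora; this sketch substitutes a tiny hardcoded stopword set for `stopwords.words('english')`:

```python
# Dependency-free sketch of the stopword-filtering step, using a small
# hardcoded stopword set instead of NLTK's corpus.
STOP_WORDS = {'the', 'is', 'at', 'a', 'on', 'and', 'to'}

def remove_stopwords(tokens):
    # Lowercase before comparing so "The" is filtered like "the".
    return [t for t in tokens if t.lower() not in STOP_WORDS]

tokens = "The meeting is at noon and covers the merger".split()
print(remove_stopwords(tokens))  # ['meeting', 'noon', 'covers', 'merger']
```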
61 changes: 61 additions & 0 deletions continuous_learning.py
@@ -0,0 +1,61 @@
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report
import pandas as pd

class ContinuousLearning:
def __init__(self, user_profile, learning_progress):
self.user_profile = user_profile
self.learning_progress = learning_progress
self.model = None

def load_data(self):
# Load data from user_profile and learning_progress
data = pd.concat([self.user_profile, self.learning_progress], axis=1)
return data

def preprocess_data(self, data):
# Preprocess data, handle missing values, convert categorical data to numerical, etc.
data = data.dropna()
data = pd.get_dummies(data)
return data

def split_data(self, data):
# Split data into training and testing sets
X = data.drop('target', axis=1)
y = data['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
return X_train, X_test, y_train, y_test

def train_model(self, X_train, y_train):
# Train model using RandomForestClassifier
self.model = RandomForestClassifier(n_estimators=100)
self.model.fit(X_train, y_train)

def evaluate_model(self, X_test, y_test):
# Evaluate model performance
y_pred = self.model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))

def update_learning_progress(self, learning_progress):
# Update learning progress
self.learning_progress = learning_progress

def run(self):
# Run continuous learning process
data = self.load_data()
data = self.preprocess_data(data)
X_train, X_test, y_train, y_test = self.split_data(data)
self.train_model(X_train, y_train)
self.evaluate_model(X_test, y_test)

if __name__ == "__main__":
user_profile = pd.read_csv('user_profile.csv')
learning_progress = pd.read_csv('learning_progress.csv')
continuous_learning = ContinuousLearning(user_profile, learning_progress)
continuous_learning.run()
61 changes: 61 additions & 0 deletions cross_platform_syncing.py
@@ -0,0 +1,61 @@
import os
import json
from ai_assistant.shared_dependencies import user_profile, meeting_data, notification_settings, task_list

class CrossPlatformSync:
def __init__(self):
self.user_profile = user_profile
self.meeting_data = meeting_data
self.notification_settings = notification_settings
self.task_list = task_list

def sync_data(self, platform):
if platform == 'google':
self.sync_google()
elif platform == 'apple':
self.sync_apple()
elif platform == 'microsoft':
self.sync_microsoft()
else:
print("Invalid platform")

def sync_google(self):
# Logic to sync data with Google services
pass

def sync_apple(self):
# Logic to sync data with Apple services
pass

def sync_microsoft(self):
# Logic to sync data with Microsoft services
pass

def export_data(self, file_path):
data = {
'user_profile': self.user_profile,
'meeting_data': self.meeting_data,
'notification_settings': self.notification_settings,
'task_list': self.task_list
}
with open(file_path, 'w') as f:
json.dump(data, f)

def import_data(self, file_path):
if os.path.exists(file_path):
with open(file_path, 'r') as f:
data = json.load(f)
self.user_profile = data['user_profile']
self.meeting_data = data['meeting_data']
self.notification_settings = data['notification_settings']
self.task_list = data['task_list']
else:
print("File does not exist")

if __name__ == "__main__":
cross_platform_sync = CrossPlatformSync()
cross_platform_sync.sync_data('google')
cross_platform_sync.export_data('data.json')
cross_platform_sync.import_data('data.json')
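The export/import pair above is a plain JSON round trip. A self-contained sketch with hypothetical state (a temporary file stands in for `data.json`):

```python
import json
import os
import tempfile

data = {'user_profile': {'name': 'Donna'}, 'task_list': ['file briefs']}

# Round-trip the state through a temporary JSON file.
fd, path = tempfile.mkstemp(suffix='.json')
os.close(fd)
with open(path, 'w') as f:
    json.dump(data, f)
with open(path) as f:
    restored = json.load(f)
os.remove(path)

print(restored == data)  # True
```

Note that JSON only preserves dicts, lists, strings, numbers, booleans, and null; datetimes in `meeting_data` would need explicit serialization.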