5. Frontend
- 5.1 Dashboard
- 5.2 Overview
- 5.3 Calendar
- 5.4 SearchBar
- 5.5 SmartScribe
- 5.6 Application Tracker
- 5.7 Chatbot
Figure 3 (b): Final Dashboard Page.
The dashboard enables users to gain a clear view of their most important updates. In planning, the dashboard was designated as the initial landing point when users log into the app, so a clear and concise information display was the main priority. Todo, Outlook, Agenda and Teams are separated into a grid view with the aim of minimising scrolling, by fitting all tiles within the available screen. The Todo[13] and Agenda displays are created using the Graph Toolkit[14], which also provides the UI for each component.
public async Task BeginEmail()
{
    // Get the user emails
    if (count > 0)
    {
        var graphTimeZone = user.GetUserGraphTimeZone();
        dateTimeFormat = $"{user.GetUserGraphDateFormat()} {user.GetUserGraphTimeFormat()}";
        // Get mail from inbox
        var mailPage = await graphClient.Me
            .MailFolders["Inbox"]
            .Messages
            .GetAsync(config =>
            {
                config.Headers.Add("Prefer", $"outlook.timezone=\"{graphTimeZone}\"");
                config.QueryParameters.Select = new string[] { "subject", "sender", "bodyPreview", "receivedDateTime", "isRead" };
                config.QueryParameters.Orderby = new string[] { "receivedDateTime desc" };
            });
        allMessages = mailPage?.Value ?? new List<Message>();
    }
}

The Outlook display is created from a call to the Graph API[15] in which the subject, sender, bodyPreview, receivedDateTime and isRead properties are queried[16], as seen in the code above. The emails are sorted by descending receivedDateTime so that the most recent emails appear at the top. subject, sender and bodyPreview are displayed in the Outlook tile[16], and only emails that have not been read (isRead is false) are displayed, to reduce clutter. To retrieve the Teams that the user is participating in, another call is made to the Graph API, as shown in the code below[21].
public async Task BeginTeams()
{
    if (count > 0)
    {
        var teamsPage = await graphClient.Me.JoinedTeams
            .GetAsync();
        teams = teamsPage.Value;
    }
}
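The unread-only behaviour of the Outlook tile described above is not shown in the snippets; as a sketch (the real filtering happens in the Razor markup, so this is an assumption about the approach rather than the actual code), it amounts to a simple filter over the messages fetched by BeginEmail:

```csharp
// Sketch only: keep just the unread messages for the Outlook tile,
// assuming allMessages was populated by BeginEmail above.
var unreadMessages = allMessages.Where(m => m.IsRead == false).ToList();
```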
Figure 4 (b): Final Overview Page.
The Overview page was developed to act as a central hub for multiple Microsoft apps. In the current version of Nexus, it shows the user's emails, teams and chats, which can be tabbed between, as well as calendar events, in an organised, minimalist layout. Furthermore, the page allows the user to create filters which group together specific users, teams and calendar events; users can then select their chosen filter easily. The UI of the page uses Microsoft FluentUI to create dropdowns, tabs and buttons, in order to create a fluid experience.
The group selection is performed using three FluentComboBox objects[17], each providing a dropdown with items that can be selected. The selected item is then stored in a variable; this is done for Emails, Teams and Calendars. The "Add Group" button[18] is bound to the addToGroup function, which creates a GroupItem and adds it to the dropdown at the top of the screen, before clearing the data entered. The page also keeps track of four global variables which contain the currently selected email, team, channel and chat, so that the selected item is preserved when switching between tabs.
public class GroupItem
{
    public string? Name { get; set; }
    public string? mailName { get; set; }
    public string? teamName { get; set; }
    public string? calName { get; set; }
}

void addToGroup(string m, string t, string c)
{
    if (m == "") { m = "All"; }
    if (t == "") { t = "All"; }
    if (c == "") { c = "All"; }
    if (!string.IsNullOrWhiteSpace(placename))
    {
        myGroups.Add(new GroupItem { Name = placename, mailName = m, teamName = t, calName = c });
    }
    placename = String.Empty;
    selectedCalOption = default;
    calValue = String.Empty;
    selectedMailOption = default!;
    mailValue = String.Empty;
    selectedTeamOption = default!;
    teamValue = String.Empty;
}

The mail section in Overview is the first of three FluentUI tabs. The page allows users to read, delete and reply to their emails. The function getMail(mailQuery), shown below, is used to obtain a list of type Message containing the user's emails[15]. The mailQuery parameter contains the email address the user may have added to the filter[16]; by default, this is set to "All". The function first authenticates the Graph client and then requests the emails in the user's Inbox folder. The response from Microsoft Graph contains significant unnecessary data, so a QueryParameter is added so that the response only contains the required fields: subject, sender, body, isRead and bodyPreview.
public async Task getMail(string addresses)
{
    graphClient = clientFactory.GetAuthenticatedClient();
    var mailPage = await graphClient.Me
        .MailFolders["Inbox"]
        .Messages
        .GetAsync(config =>
        {
            if (addresses != "All") { config.QueryParameters.Filter = $"from/emailAddress/address eq '{addresses}'"; }
            config.QueryParameters.Select = new string[] { "subject", "sender", "body", "isRead", "bodyPreview" };
        });
    // Set to a non-async variable so it can be accessed in the HTML.
    myMessages = mailPage?.Value ?? new List<Message>();
}

The user can also respond to emails on this page by calling the sendEmail function. As shown in the code snippet below, the function creates a ReplyPostRequestBody and populates it with the user's input, before sending a POST request to Microsoft Graph[19]. Furthermore, as some emails may not have been sent directly to the user, a check is first made to determine who the sender is, and a new List<Recipient> is created if no recipients are found.
public async void sendEmail(Message originEmail, string content)
{
    graphClient = clientFactory.GetAuthenticatedClient();
    var recipients = originEmail.ToRecipients;
    var mainAddress = originEmail?.Sender?.EmailAddress?.Address;
    if (recipients == null)
    {
        recipients = new List<Recipient> { new Recipient { EmailAddress = new EmailAddress { Address = mainAddress } } };
    }
    else
    {
        var mainAddressSend = new Recipient { EmailAddress = new EmailAddress { Address = mainAddress } };
        recipients.Add(mainAddressSend);
    }
    var requestBody = new Microsoft.Graph.Me.Messages.Item.Reply.ReplyPostRequestBody
    {
        Message = new Message
        {
            ToRecipients = recipients,
        },
        Comment = content,
    };
    emailContent = String.Empty;
    emailReply = false;
    await graphClient.Me.Messages[originEmail?.Id].Reply.PostAsync(requestBody);
    await refreshContent(0);
}

Deleting mail uses a similar asynchronous request, but only requires the email ID. The deleteMail function below makes this request and then clears the currentEmail variable in case the currently selected email is the one being deleted[20].
public async Task deleteMail(Message email)
{
    graphClient = clientFactory.GetAuthenticatedClient();
    await graphClient.Me.Messages[email.Id].DeleteAsync();
    if (email == currentEmail) { currentEmail = null; }
    await refreshContent(0);
}

The other tabs accessible to the user, Teams and Chats, required obtaining data from Microsoft Teams. The getTeams(teamQuery) function is called when the tab is selected, to display the user's joined teams[21]. Once the user selects a team, the getChannels() function is called, which displays all available channels[22]. However, obtaining the Teams messages proved more challenging than accessing the email data, as the individual chats and messages each require a separate Microsoft Graph request. This could lead to hundreds of Graph requests, resulting in significant loading times. To circumvent this, a batch request is sent to Graph instead using a PostAsync method[23].
/* ASYNCHRONOUSLY GET TEAMS */
public async Task getTeams(string addresses)
{
    graphClient = clientFactory.GetAuthenticatedClient();
    var teamPage = await graphClient.Me.JoinedTeams.GetAsync();
    if (addresses != "All")
    {
        var tempTeams = new List<Team>();
        foreach (var myTeam in teamPage.Value)
        {
            // Compare case-insensitively against the filter value
            if (myTeam.DisplayName.ToLower() == addresses.ToLower()) { tempTeams.Add(myTeam); }
        }
        myTeams = tempTeams;
    }
    else { myTeams = teamPage?.Value ?? new List<Team>(); }
}

/* ASYNCHRONOUSLY GET CHANNELS */
public async Task getChannels()
{
    graphClient = clientFactory.GetAuthenticatedClient();
    var channelPage = await graphClient.Teams[currentTeam.Id].Channels.GetAsync();
    myChannels = channelPage?.Value ?? new List<Channel>();
}

The displayChannelData function below sends a batch request to Microsoft Graph that requests all data from a particular channel[23]. The function first requests all main messages from the channel; these are the initial messages in a Teams channel, each of which can have replies. A new batch request is then created, and the function loops through each initial message, adding a request for its replies to the batch. The batch is then sent to Microsoft Graph, and a dictionary containing a HttpResponseMessage per step is returned. This can then be decoded to obtain all the reply data with a single Graph request. A similar process is used to obtain chat data, the main difference being that there are no channels to filter through.
public async Task displayChannelData()
{
    graphClient = clientFactory.GetAuthenticatedClient();
    var myRepliesLocal = new List<JToken>();
    if (currentChannel != null)
    {
        var convoPageTest = await graphClient.Teams[currentTeam.Id].Channels[currentChannel.Id].Messages.GetAsync();
        var reversal = convoPageTest?.Value ?? new List<ChatMessage>();
        reversal.Reverse();
        myConvos = reversal;
        var batchRequestContent = new BatchRequestContent(graphClient);
        string? repliesRequestId = null;
        foreach (var conversation in myConvos)
        {
            var replyPageTest = graphClient.Teams[currentTeam.Id].Channels[currentChannel.Id]
                .Messages[conversation.Id].Replies.ToGetRequestInformation();
            repliesRequestId = await batchRequestContent.AddBatchRequestStepAsync(replyPageTest);
        }
        var returnedResponse = await graphClient.Batch.PostAsync(batchRequestContent);
        try
        {
            var gotReplies = await returnedResponse.GetResponsesAsync();
            foreach (var x in gotReplies)
            {
                var y = await x.Value.Content.ReadAsStringAsync();
                JObject myjson = JObject.Parse(y);
                JToken myToken = myjson["value"];
                if (myToken?.ToString() != "[]") { myRepliesLocal.Add(myToken); }
            }
            myReplies = myRepliesLocal;
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Getting replies failed: {ex.Message}");
        }
    }
}

Responding to messages and creating channel messages are done very similarly to responding to emails: a new ChatMessage is created from the user input and sent to Graph via a PostAsync request[24]. It is sent either to the Messages collection or to the Replies collection, depending on which action the user is performing[24][25].
public async void newChannelMessage(string content)
{
    graphClient = clientFactory.GetAuthenticatedClient();
    var requestBody = new ChatMessage
    {
        Body = new ItemBody
        {
            Content = content,
        },
    };
    newChannelMessageContent = String.Empty;
    newChannelMessageBool = false;
    var result = await graphClient.Teams[currentTeam.Id].Channels[currentChannel.Id].Messages
        .PostAsync(requestBody);
    await refreshContent(1);
}

public async void replyChannelMessage(ChatMessage currMessage, string content)
{
    graphClient = clientFactory.GetAuthenticatedClient();
    var requestBody = new ChatMessage
    {
        Body = new ItemBody
        {
            ContentType = BodyType.Html,
            Content = content,
        },
    };
    var result = await graphClient.Teams[currentTeam.Id].Channels[currentChannel.Id].Messages
        [currMessage.Id].Replies.PostAsync(requestBody);
    await refreshContent(2);
}

When deleting a chat message, a soft delete is used[26]. This allows the deletion to be undone if the user wishes. It also keeps a placeholder in the display, so that other users can see that a message was previously deleted.
// Delete channel message
public async void delChannelMessage(ChatMessage currMessage)
{
    graphClient = clientFactory.GetAuthenticatedClient();
    await graphClient.Teams[currentTeam.Id].Channels[currentChannel.Id].Messages[currMessage.Id].SoftDelete.PostAsync();
    await refreshContent(1);
}

When using the Overview page, sending and deleting messages and emails requires the page to update frequently so that the newest information is available to the user. To achieve this, most functions contain a StateHasChanged() call, which notifies the component that its state has changed so the page is re-rendered[27]. To do this periodically, a timer was created in Overview that calls the asynchronous refreshContent function at a regular interval, refreshing all data on the page.
// Refresh periodically (the timer below fires every 5000 ms)
private static System.Timers.Timer _timer;
private int counter = 60;

protected override void OnInitialized()
{
    StartTimer();
}

public void StartTimer()
{
    _timer = new System.Timers.Timer(5000);
    _timer.Elapsed += CountDownTimer;
    _timer.Enabled = true;
}

public async void CountDownTimer(Object source, System.Timers.ElapsedEventArgs e)
{
    if (counter > 0) { counter -= 1; }
    else { _timer.Enabled = false; }
    await refreshContent(4);
}
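The refreshContent function is called throughout this section but never shown. As a minimal sketch of how it might dispatch on its integer argument, assuming the tab indices and filter variables used above (the names and case mapping are assumptions, not the actual implementation):

```csharp
// Hypothetical sketch of refreshContent: the integer selects which tab's
// data to re-fetch before re-rendering. Everything except the getter
// functions shown earlier is an assumption.
public async Task refreshContent(int tab)
{
    switch (tab)
    {
        case 0:                                   // Mail tab
            await getMail(mailValue == "" ? "All" : mailValue);
            break;
        case 1:                                   // Teams tab
        case 2:                                   // after posting a reply
            await displayChannelData();
            break;
        default:                                  // full periodic refresh
            await getMail("All");
            await getTeams("All");
            break;
    }
    await InvokeAsync(StateHasChanged);           // re-render on the UI thread
}
```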
Figure 5 (b): Final Calendar Page.
The calendar is rendered from scratch. It always has 7 columns, one for each day of the week, and the days are rendered in relation to that. The CalculateNumberOfEmptyDays function calculates the number of empty days at the beginning of the month. It does this by creating a DateTime for the first day of the month and reading its day of the week. In .NET the day of the week is represented as an enumeration in which Sunday is 0 and Saturday is 6. Since the calendar starts with Sunday, the day-of-week number of the first day is exactly the number of empty tiles that precede it (the comparison against 7 in the code below is a defensive check that never triggers). The blank tiles themselves are generated using the GenerateEmptyDays function. The number of rows is calculated by dividing the total number of tiles by 7 and rounding up.
public async Task LoadCalendar()
{
    List<CalendarDay> GenerateEmptyDays(int numberOfEmptyDays)
    {
        return Enumerable.Range(0, numberOfEmptyDays)
            .Select(_ => new CalendarDay { DayNumber = 0, IsEmpty = true })
            .ToList();
    }

    List<CalendarDay> GenerateMonthDays()
    {
        int numberOfDaysInMonth = DateTime.DaysInMonth(year, month);
        return Enumerable.Range(1, numberOfDaysInMonth)
            .Select(day => new CalendarDay
            {
                DayNumber = day,
                IsEmpty = false,
                Date = new DateTime(year, month, day),
                dayEvents = new List<Microsoft.Graph.Beta.Models.Event>()
            })
            .ToList();
    }

    int CalculateNumberOfEmptyDays()
    {
        var firstDayDate = new DateTime(year, month, 1);
        int weekDayNumber = (int)firstDayDate.DayOfWeek;
        return weekDayNumber == 7 ? 0 : weekDayNumber;
    }

    int CalculateRowsCount(int totalDays)
    {
        return totalDays % 7 == 0
            ? totalDays / 7
            : (totalDays / 7) + 1;
    }

    days = new List<CalendarDay>();
    int numberOfEmptyDays = CalculateNumberOfEmptyDays();
    days.AddRange(GenerateEmptyDays(numberOfEmptyDays));
    days.AddRange(GenerateMonthDays());
    rowsCount = CalculateRowsCount(days.Count);
    foreach (var day in days)
    {
        day.dayEvents = getEventsofToday(day);
    }
}

Each tile is of type CalendarDay. A call is made to the Graph API and the events for each day are stored in dayEvents[28]. The tiles, as well as the day numbers, are rendered using builder functions, so that the tile layout can change dynamically for each month. Each tile contains two pieces of information: the day number and the events for that day. The Calendar page calls the Microsoft Graph API to fetch the events in the user's calendar, stores the query results in a global list called events, and for each day filters the list and appends the matches to dayEvents. When the user selects a tile, the events for that day are rendered as a list; if an event is a meeting, its row is outlined in green. Upon clicking on the event, the user can get the meeting summary and can also write it to OneNote.
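The getEventsofToday function is referenced but not shown in the report. A plausible sketch, assuming the global events list of Graph Event objects described above and parseable Start.DateTime values (the exact implementation may differ):

```csharp
// Hypothetical sketch of getEventsofToday: filter the globally fetched
// events list down to those starting on the given tile's date.
private List<Microsoft.Graph.Beta.Models.Event> getEventsofToday(CalendarDay day)
{
    return events
        .Where(e => e.Start?.DateTime != null
                    && DateTime.Parse(e.Start.DateTime).Date == day.Date.Date)
        .ToList();
}
```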
[Parameter]
public RenderFragment<CalendarDay> DayTemplate { get; set; } = DefaultDayTemplate;

private static RenderFragment<CalendarDay> DefaultDayTemplate = (day) => builder =>
{
    if (day.DayNumber != 0)
    {
        builder.OpenElement(1, "div"); // Container div for the day
        builder.OpenElement(2, "div"); // Div for the day number
        builder.AddContent(3, day.DayNumber);
        builder.CloseElement();
        // Display an indicator dot if the day has events
        if (day.dayEvents != null && day.dayEvents.Count > 0)
        {
            builder.OpenElement(4, "div"); // Dot marking days with events
            builder.AddAttribute(5, "style", "background-color: orange; border-radius: 50%; width: 10px; height: 10px; margin: 0 auto;");
            builder.CloseElement();
        }
        builder.CloseElement(); // Close the container div
    }
};

Obtaining call transcript contents from the Graph API was not available during development, so the beta version of the API was used instead[29]. This required re-implementing the authentication files using Microsoft.Graph.Beta so that the beta endpoint could be accessed[30]. The GetContent function below is the main entry point once the user clicks on the transcript; it calls SummaryHelperFunction, which extracts the transcript, and passes the result to summariseText() to be summarised by Azure OpenAI GPT-3.5[39].
protected async void GetContent(string meetingLink)
{
    OnOpen();
    // Resolve the meeting link to its transcript text, then summarise it
    string transcriptContent = await SummaryHelperFunction(meetingLink);
    await summariseText(transcriptContent);
    StateHasChanged();
}

The SummaryHelperFunction takes the meetingLink as an argument. From the link, it attempts to get the OnlineMeeting resource from the Graph API[37]. Using this meeting resource, the meeting ID can then be used to get the Transcript resource, which in turn is used to extract the content in the form of a Stream[36]. A StreamReader then converts this into text that can be processed by Azure OpenAI. The AI summarise function runs similarly to the SmartScribe one, the difference being that the prompt has been re-engineered to fit the different content it receives.
public string ConvertWebVttStreamToString(Stream? stream)
{
    using (StreamReader reader = new StreamReader(stream, Encoding.UTF8))
    {
        return reader.ReadToEnd();
    }
}

protected async Task<string> SummaryHelperFunction(string meetingLink)
{
    var originalString = "JoinWebUrl eq '1234'";
    string modifiedLink = originalString.Replace("'1234'", $"'{meetingLink}'");
    try
    {
        var me = await graphClient.Me.GetAsync();
        var meetingInfo = await graphClient.Me.OnlineMeetings.GetAsync((requestConfiguration) =>
        {
            requestConfiguration.QueryParameters.Filter = modifiedLink;
        });
        var transcripts2 = await graphClient
            .Users[me.Id]
            .OnlineMeetings[meetingInfo.Value[0].Id]
            .Transcripts
            .GetAsync();
        var requestInformation = graphClient
            .Users[me.Id]
            .OnlineMeetings[meetingInfo.Value[0].Id]
            .Transcripts[transcripts2.Value[0].Id]
            .Content
            .ToGetRequestInformation();
        requestInformation.UrlTemplate += "{?format}"; // Add the format query parameter to the template
        requestInformation.QueryParameters.Add("format", "text/vtt");
        var transcript_stream = await graphClient.RequestAdapter.SendPrimitiveAsync<Stream>(requestInformation);
        var transcript_content = ConvertWebVttStreamToString(transcript_stream);
        return transcript_content;
    }
    catch
    {
    }
    return "Unable to get Transcript";
}

The SearchBar feature was created with the goal of simplifying content retrieval for all users by providing access to all Microsoft 365 data in one portal. The SearchBar works by calling the Search endpoint in Microsoft Graph[31][34] for Outlook, Teams and OneDrive, performing batched requests[33] on the content the user has entered via an input tag[32]. Once the data is fetched via the API, it is loaded into a List<Author> variable, which is then rendered in the dialog. The Author class consists of four attributes, shown in the code below, all of which are rendered on the screen.
public class Author
{
    public string Sender { get; }
    public string Title { get; }
    public string Url { get; }
    public string App { get; }

    public Author(string sender, string title, string webLink, string app)
    {
        Sender = sender;
        Title = title;
        Url = webLink;
        App = app;
    }
}

The request for OneDrive is a QueryPostRequest, with the searchTerm being the string entered by the user[32]. It has additional features enabled, such as EnableModification, which selects similar search results in case there is a misspelling in the user's search term.
var requestBodyDrive = new Microsoft.Graph.Search.Query.QueryPostRequestBody
{
    Requests = new List<SearchRequest>
    {
        new SearchRequest
        {
            EntityTypes = new List<EntityType?>
            {
                EntityType.DriveItem,
            },
            Query = new SearchQuery
            {
                QueryString = searchTerm,
            },
            QueryAlterationOptions = new SearchAlterationOptions
            {
                EnableModification = true,
            }
        },
    },
};

Once the response is received, it is processed to extract the relevant information, which is stored in an instance of the Author class. This is then appended to the list rendered on the screen.
try
{
    var user1 = await returnedResponse
        .GetResponseByIdAsync<Microsoft.Graph.Search.Query.QueryResponse>(driveId);
    if (user1?.Value?[0]?.HitsContainers?[0]?.Total == 0)
    {
        Console.WriteLine("Nothing found in OneDrive");
    }
    else
    {
        for (int i = 0; i < user1?.Value?[0]?.HitsContainers?[0]?.Hits?.Count; i++)
        {
            var searchItems = user1?.Value?[0]?.HitsContainers?[0]?.Hits?[i]?.Resource?.AdditionalData;
            JsonElement lastModifiedBy = (JsonElement)searchItems["lastModifiedBy"];
            string oneDriveCreator = lastModifiedBy
                .GetProperty("user")
                .GetProperty("displayName")
                .GetString();
            data_response.Add(new Author(oneDriveCreator,
                searchItems?["name"].ToString(),
                searchItems["webUrl"].ToString(),
                "OneDrive"));
            Console.WriteLine("found in drive");
        }
    }
}
catch (ServiceException ex)
{
    Console.WriteLine("Get OneDrive Failed: " + ex);
}
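The dispatch of the batched search requests themselves is not shown above. Under the Graph SDK's batching API (the same approach used earlier in displayChannelData), it might be sketched as follows; requestBodyMail and requestBodyChat are assumed to be built like requestBodyDrive, with EntityType.Message and EntityType.ChatMessage respectively, and all variable names here are assumptions:

```csharp
// Hypothetical sketch: batch the OneDrive, Outlook and Teams search
// requests into a single Graph call.
var batch = new BatchRequestContent(graphClient);
var driveId = await batch.AddBatchRequestStepAsync(
    graphClient.Search.Query.ToPostRequestInformation(requestBodyDrive));
var mailId = await batch.AddBatchRequestStepAsync(
    graphClient.Search.Query.ToPostRequestInformation(requestBodyMail));
var chatId = await batch.AddBatchRequestStepAsync(
    graphClient.Search.Query.ToPostRequestInformation(requestBodyChat));
var returnedResponse = await graphClient.Batch.PostAsync(batch);
// Each response is then retrieved by its step ID, as in the try block above.
```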
Figure 7 (b): Final SmartScribe Page.
Initially, the SmartScribe page was created as a proof of concept of OpenAI usage[39]. The page was intended for summarising sections of text within the application, but it proved effective enough to be developed further. New features were added that allow a user to upload a file from their local drive or OneDrive[38], summarise it, and then upload the summary to their OneNote for later use. The initial challenge for this page was engineering a prompt that effectively reduced the length of the input text whilst still retaining the semantics and technical language. This required instructing the AI to keep all equations and respond in a technical manner. The input had to be split into multiple sections before being inserted into the prompt, partly to fit the data within the prompt's character limit, but also to ensure that the generative AI did not over-summarise and still retained the relevant information.
protected async Task SummariseText(string content)
{
    // Summarise the content chunk by chunk, then join the chunk summaries
    var summaries = new List<string>();
    var overallSummary = "";
    Console.WriteLine("Inside ai: " + content);
    // Split the content into smaller chunks
    var chunkSize = 750;
    var chunks = SplitContentIntoChunks(content, chunkSize);
    // Perform summarisation for each chunk
    foreach (var chunk in chunks)
    {
        string prompt = $"Summarise the following text in a professional and business like manner and in extreme detail (Include any equations mentioned):\n\n{chunk}\n";
        var completionFinish = await callOpenAI(prompt);
        summaries.Add(completionFinish[0]);
        summaries.Add(completionFinish[1]);
        summaries.Add(completionFinish[2]);
    }
    overallSummary = string.Join(" ", summaries);
    summaries.Clear();
    summary = overallSummary;
    Console.WriteLine("Generated AI summary");
}

// Helper method to split the content into smaller chunks on sentence boundaries
private IEnumerable<string> SplitContentIntoChunks(string content, int chunkSize)
{
    var sentences = content.Split('.', '!', '?');
    var currentChunk = new StringBuilder();
    foreach (var sentence in sentences)
    {
        if (currentChunk.Length + sentence.Length + 1 <= chunkSize)
        {
            currentChunk.Append(sentence).Append('.');
        }
        else
        {
            yield return currentChunk.ToString();
            currentChunk.Clear().Append(sentence).Append('.');
        }
    }
    if (currentChunk.Length > 0)
    {
        yield return currentChunk.ToString();
    }
}

An issue encountered with the output from the AI was the completion length limit, which would often truncate the summary received. To work around this, the AI was asked to continue the response twice more, to ensure the output ended as a complete summary.
private async Task<List<string>> callOpenAI(string prompt)
{
    var summaries = new List<string>();
    var completionsResponse = await OpenAIService.client.GetCompletionsAsync(OpenAIService.engine, prompt);
    var completion = completionsResponse.Value.Choices[0].Text;
    summaries.Add(completion);
    string finish = $"Keep summarising in extreme detail:\n\n{prompt}\n\nGave summary:\n\n{completion}";
    var completionsResponseFinish = await OpenAIService.client.GetCompletionsAsync(OpenAIService.engine, finish);
    var completionFinish = completionsResponseFinish.Value.Choices[0].Text;
    summaries.Add(completionFinish);
    string finish2 = $"Keep summarising in extreme detail:\n\n{prompt}\n\nGave summary:\n\n{completionFinish}";
    var completionsResponseFinish2 = await OpenAIService.client.GetCompletionsAsync(OpenAIService.engine, finish2);
    var completionFinish2 = completionsResponseFinish2.Value.Choices[0].Text;
    summaries.Add(completionFinish2);
    return summaries;
}

To further improve the functionality of this page, the ability to upload other file formats, such as PowerPoint and Word documents, was added. Supporting each new file format required adding a dedicated method for reading that type of file.
public string ExtractTextFromPDF(Stream stream)
{
    var document = new GcPdfDocument();
    document.Load(stream);
    var text = document.GetText();
    return text;
}

private string ExtractTextFromWord(Stream stream)
{
    var text = "";
    using (var document = WordprocessingDocument.Open(stream, false))
    {
        var body = document.MainDocumentPart.Document.Body;
        text = body.InnerText;
    }
    return text;
}

private string ExtractTextFromPowerPoint(Stream stream)
{
    var text = "";
    using (var presentation = PresentationDocument.Open(stream, false))
    {
        var slideText = "";
        foreach (var slidePart in presentation.PresentationPart.SlideParts)
        {
            var slide = slidePart.Slide;
            var paragraphs = slide.Descendants<DocumentFormat.OpenXml.Drawing.Paragraph>();
            foreach (var paragraph in paragraphs)
            {
                slideText += paragraph.InnerText + " ";
            }
        }
        text = slideText.Trim();
    }
    return text;
}

The next consideration was how to effectively allow a user to include the summarisation in their notes. The primary options for this were Microsoft Word and OneNote. As OneNote is more geared towards note-taking, it was chosen as the main method of saving summarisations; this also paired well with OneNote's inherent capability to hold uploaded lecture slides. OneNote's integration with Microsoft Graph was, however, limited to creating simple Notebooks and Sections, rather than writing directly to a page. The issue was presented to Microsoft in the weekly catch-up call, and a custom workaround was provided, shown below. A potential future improvement could be to allow the user to give feedback on the summary and use this to refine the prompt[41].
var presentation = new StringContent(htmlString, Encoding.UTF8, "text/html");
multipartContent.Add(presentation, "Presentation"); // The part needs a name
// More HttpContent instances could be added here if needed.
// Create a request information instance and make the request manually.
var requestInformation = graphClient.Me.Onenote.Sections[sectionId].Pages.ToGetRequestInformation();
requestInformation.Headers.Add("Content-Type", multipartContent.Headers.ContentType.ToString());
requestInformation.HttpMethod = Method.POST;
requestInformation.Content = await multipartContent.ReadAsStreamAsync();
var errorMapping = new Dictionary<string, ParsableFactory<IParsable>>
{
    { "4XX", ODataError.CreateFromDiscriminatorValue },
    { "5XX", ODataError.CreateFromDiscriminatorValue },
};
var pageResult = await graphClient.RequestAdapter.SendAsync<OnenotePage>(requestInformation, OnenotePage.CreateFromDiscriminatorValue, errorMapping);
Figure 6 (b): Final Application Tracker Page.
The Application Tracker page was created to help job applicants keep track of their applications. A significant amount of user data therefore needed to be stored between sessions, so this data was transferred to the Web API backend, which in turn stored it in a MongoDB database. A BackendAPIService class was created, which the Application Tracker module uses as a layer of abstraction to communicate seamlessly with the backend. This class consists of asynchronous methods such as getUserTimelineAsync, shown below, which abstracts the process of making HTTPS requests to the backend. The component uses classes defined in a folder named SharedModels, which allows both the backend and frontend to use the same classes to represent objects related to job applications.
public async Task<ApplicationTimeline> getUserTimelineAsync(string username, int timelineID)
{
    var response = await httpClient.GetAsync($"https://localhost:7023/api/JobApplicants/get-timeline/{username}/{timelineID}");
    string responseBody = await response.Content.ReadAsStringAsync();
    ApplicationTimeline timelines = BsonSerializer.Deserialize<ApplicationTimeline>(responseBody);
    return timelines;
}

The rendering loop of this page starts by fetching all the user's application data from the backend. Each time the user changes the data, the button click triggers a function which first alters the data in the client-side runtime (frontend); this makes the UI feel more responsive to click events. The function then makes a POST request using the BackendAPIService, so that the corresponding data is updated in the database. If the response is successful and the data has been updated in the backend, the frontend data remains as it is. If the response indicates a failed request, the change made immediately after the user input is reverted, since the information was not successfully updated in the database. This design keeps data consistent between frontend and backend while providing a responsive UI. The simple function shown below, which updates an application's archived status, illustrates this.
private async Task UpdateArchivedStatus(ApplicationTimeline timeline, bool newStatus)
{
    bool oldStatus = timeline.archived;
    timeline.archived = newStatus; // Optimistic update in the frontend
    HandleSortCriterionChange();
    StateHasChanged();
    bool result = await backendApiService.updateArchivedStatus(timeline.timelineID, Username, newStatus);
    if (!result)
    {
        // Backend update failed: revert the optimistic change
        timeline.archived = oldStatus;
        HandleSortCriterionChange();
        StateHasChanged();
    }
}

The chatbot was developed to give the user access to Azure OpenAI's GPT-3.5 model[44][45] through the web app. The chatbot sends prompts to, and receives responses from, the AI model. Additionally, it can auto-generate to-do lists and deliver them to the user's To Do account. When users click the create To Do button and input their plans for the day in ordinary sentences, the model outputs a list of tasks along with sensible times to complete them. The function displayed in the code snippet below shows the prompt that was used to create the To Do list and the way its output is parsed. Requests containing the tasks and their corresponding times are then sent to Graph, which in turn sends an HTTPS request to update the To Do list for the Microsoft account.
private async Task ConvertTasks(string textValue)
{
    creating = true;
    Console.WriteLine("Entered: " + textValue);
    textValue = textValue + " |||";
    string prompt = $"Can you create a to-do list including a suitable start time in the form of hh:mm for today according to my plan/plans below, it is not always written in chronological order. List each todo task with a ‘+’ where the time is written first and the task is written next to it, separated by ‘>’ and rewrite some tasks to make them clearer and more precise, the plans list ends with '|||' : \n {textValue}";
    Console.Write($"Input: {prompt}\n");
    var completionsResponse = await openAIService.client.GetCompletionsAsync(openAIService.engine, prompt);
    var completion = completionsResponse.Value.Choices[0].Text;
    Console.WriteLine(completion);
    todolist = completion.Split("\n");
    // Parse the list of AI generated tasks and times
    foreach (string todo in todolist)
    {
        if (todo.Contains("+") && todo.Contains(">"))
        {
            todoStart = todo.Split(">")[0].Split("+")[1];
            todoTasks = todo.Split(">")[1];
            TaskDict[todoStart] = todoTasks;
            await SendToDo(todoTasks, todoStart);
        }
    }
    creating = false;
}

Furthermore, the chatbot integrates a weather API to fetch current weather data based on the user's location, which is derived from the user's IP address, as seen in the GetWeather function.
private async Task GetWeather()
{
    HttpClient client = new HttpClient();
    // Get the IP address of the user
    var ip = await client.GetStringAsync("https://api.ipify.org");
    string myIP = ip.ToString();
    // Get the location of the user from the IP address
    var loc = await client.GetStringAsync("https://api.ipgeolocation.io/ipgeo?apiKey=INSERT_API_KEY_HERE&ip=" + myIP);
    dynamic locationUser = JsonConvert.DeserializeObject(loc);
    // Get weather data for the user's location
    var response = await client.GetAsync("https://weather.visualcrossing.com/VisualCrossingWebServices/rest/services/timeline/" + locationUser.city + "?key=INSERT_API_KEY_HERE");
    response.EnsureSuccessStatusCode(); // Throw an exception on error
    weatherData = await response.Content.ReadAsStringAsync();
    dynamic weather = JsonConvert.DeserializeObject(weatherData);
    // Get the most recent weather conditions
    string weather_date = weather.days[0].datetime;
    string weather_desc = weather.days[0].description;
    string weather_tmaxF = weather.days[0].tempmax;
    string weather_tmaxC = FarenheitToCelsius(weather_tmaxF);
    string weather_tminF = weather.days[0].tempmin;
    string weather_tminC = FarenheitToCelsius(weather_tminF);
    // Construct the weather reply
    weatherReply = "The date is " + weather_date + " \nGeneral conditions: " + weather_desc + "\nThe high temperature will be " + weather_tmaxC + " ºC\nThe low temperature will be: " + weather_tminC + " ºC";
}
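The FarenheitToCelsius helper used above is not shown. A minimal sketch, in which the rounding and string formatting are assumptions:

```csharp
using System;
using System.Globalization;

// Hypothetical sketch of the FarenheitToCelsius helper used in GetWeather:
// parses the Fahrenheit value returned by the weather API as a string and
// returns the Celsius equivalent, rounded to one decimal place.
static string FarenheitToCelsius(string fahrenheit)
{
    double f = double.Parse(fahrenheit, CultureInfo.InvariantCulture);
    double c = (f - 32.0) * 5.0 / 9.0; // standard Fahrenheit-to-Celsius formula
    return Math.Round(c, 1).ToString(CultureInfo.InvariantCulture);
}
```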