SE Project Paper

Recycling Assistant

Please refer to the work-in-progress PDF for the Recycling Assistant below.

Link to the .tex file: recycling-assistant-paper-WIP.tex



SE Brief Introduction

Kim Soohyun

Recycling Assistant
SE, OpenCV, TACO, WasteNet

Objectives

The purpose of our project is to design an artificial intelligence model that helps people sort their recycling in real time. When the user shows a piece of trash to the camera, the system tells them in real time how it should be recycled.

Applications

This model can be installed in shared recycling areas used by many people, such as apartments, share houses, and companies. When many people dispose of trash in the same place, improperly sorted garbage can cause confusion. Our model can prevent this confusion and present an accurate recycling method, for the sake of the environment.

Simplified Procedures

Input: An object shown to the camera in real time
Modules that help classification using AI technology: OpenCV, TACO, WasteNet
Output: Recycling statistics; feedback to the user on which category the object should be recycled in

Primary Components

Recycling Classification

The Recycling Classification component will be designed using resources such as TACO and WasteNet, and we will use OpenCV for the computer vision pipeline.
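
As a rough sketch of how this classification loop might work, assuming a pre-trained waste classifier exported to ONNX (the model file name, input size, and label set below are placeholders, not parts of TACO or WasteNet):

```python
import cv2
import numpy as np

# Placeholder label set; the real categories would come from the trained model.
LABELS = ["paper", "plastic", "glass", "metal", "general waste"]

# Hypothetical ONNX export of a waste classifier (e.g., trained on TACO images).
net = cv2.dnn.readNetFromONNX("waste_classifier.onnx")

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Preprocess the frame to the (assumed) 224x224 input the model expects.
    blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0,
                                 size=(224, 224), swapRB=True)
    net.setInput(blob)
    scores = net.forward().flatten()
    category = LABELS[int(np.argmax(scores))]

    # Overlay the suggested recycling category on the live feed.
    cv2.putText(frame, f"Recycle as: {category}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("Recycling Assistant", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```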

Recycling Statistics

This component provides recycling statistics, such as which objects are discarded most often within the user group. By providing these statistics, we can raise the recycling awareness of the user group.
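
A minimal sketch of how these statistics could be aggregated, assuming each classified item is simply recorded by category (the storage design is still open):

```python
from collections import Counter

class RecyclingStats:
    """Tracks how often each recycling category is discarded by the user group."""

    def __init__(self):
        self.counts = Counter()

    def record(self, category: str) -> None:
        # Called once per classified item, e.g. from the camera loop.
        self.counts[category] += 1

    def most_discarded(self, n: int = 3):
        # Returns the n most frequently discarded categories with their counts.
        return self.counts.most_common(n)

stats = RecyclingStats()
for item in ["plastic", "paper", "plastic", "glass", "plastic"]:
    stats.record(item)
print(stats.most_discarded())  # [('plastic', 3), ('paper', 1), ('glass', 1)]
```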



AI Brief Introduction

Lim Hongrok

Sentiment Analysis on Live Transcription
AI, NUGU SDK

NUGU Inside

Objectives

The objective of the project is to portray the sentiment of a conversation while providing a transcription of the audio: as the software transcribes the conversation, the application estimates the speakers' underlying emotion, or sentiment.

Applications

An example of where it could be used is a group meeting. The conversation would be transcribed, and the sentiment classified from each speaker's spoken words. The transcribed text and the results of the sentiment analysis would be available in a text format after the meeting is over, and also shown on the screen in real time.

Simplified Procedures

Input: Conversation in audio
NUGU SDK: Transcription in text
AI: Sentiment analysis
Output: Text & analysis result

Primary Components

Sentiment Analysis

The Sentiment Analysis model used in this application will be trained by us to gauge sentiment from textual content.
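
A minimal sketch of what such a model could look like as a first baseline, assuming a labeled text dataset and a TF-IDF plus logistic regression pipeline (the final architecture and training data are still to be decided):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled data; the real dataset would be collected and curated by us.
texts = ["I really liked the proposal", "This schedule is frustrating",
         "Great job on the demo", "I am disappointed with the delay"]
labels = ["positive", "negative", "positive", "negative"]

# Baseline sentiment classifier: TF-IDF features + logistic regression.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Gauge the sentiment of a new transcribed utterance.
print(model.predict(["The meeting went really well"])[0])
```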

Live Transcription

The transcription aspect will be handled by an external API (i.e., NUGU). In order to deliver results to the screen, as well as input to the Sentiment Analysis portion, in real time, appropriate streaming I/O must be used (e.g., HTTP/2 or WebSockets).
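
As a rough sketch of the streaming side, assuming a WebSocket server that receives transcript segments and broadcasts sentiment-tagged results to connected viewers (the port, message format, and analyze() helper are assumptions, not part of the NUGU SDK):

```python
import asyncio
import json
import websockets

clients = set()

def analyze(text: str) -> str:
    # Placeholder for the trained sentiment model described above.
    return "neutral"

async def handler(websocket):
    # Each connection may send transcript segments and/or listen for results.
    clients.add(websocket)
    try:
        async for message in websocket:
            segment = json.loads(message)                    # e.g. {"speaker": "A", "text": "..."}
            segment["sentiment"] = analyze(segment["text"])  # hypothetical model call
            result = json.dumps(segment)
            # Broadcast the sentiment-tagged segment to every connected viewer.
            await asyncio.gather(*(client.send(result) for client in clients))
    finally:
        clients.discard(websocket)

async def main():
    # Host and port are arbitrary choices for this sketch.
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()  # run until cancelled

asyncio.run(main())
```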



AI Project Proposal

DW Chung

Live Transcription w/ Sentiment Analysis

This project aims to transcribe audio and determine the sentiment of the conversation. Speech recognition will be leveraged to transcribe the audio (speech-to-text) using the NUGU SDK, which already provides an ASR/STT function. Sentiment Analysis will then be applied to portray the sentiment of the transcribed audio.

The NUGU SDK provides plugins such as gstreamer and portaudio, which can be used to receive audio input. Additional external APIs may also be leveraged to increase the accuracy of the transcription; however, full accuracy is not necessarily our goal: the transcription only needs to be accurate enough to convey the general gist and sentiment. A database may also be maintained to keep transcription and sentiment logs.
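
A minimal sketch of what such a log database might look like, using SQLite as a stand-in (the schema, field names, and sample row are assumptions, not a finalized design):

```python
import sqlite3

conn = sqlite3.connect("meeting_logs.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS utterances (
        id         INTEGER PRIMARY KEY AUTOINCREMENT,
        meeting_id TEXT NOT NULL,
        spoken_at  TEXT NOT NULL,     -- ISO 8601 timestamp
        speaker    TEXT,
        transcript TEXT NOT NULL,     -- text from the ASR/STT step
        sentiment  TEXT               -- label from the sentiment model
    )
""")

# Store one transcribed, sentiment-tagged utterance (placeholder values).
conn.execute(
    "INSERT INTO utterances (meeting_id, spoken_at, speaker, transcript, sentiment) "
    "VALUES (?, ?, ?, ?, ?)",
    ("weekly-standup", "2021-01-01T10:03:12", "Speaker A",
     "Let's move the release to next week.", "negative"),
)
conn.commit()
conn.close()
```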

We envision the final product focusing more on the features provided by the NUGU SDK than on the NUGU Candle itself; however, if the NUGU Candle can be leveraged to assist in matters such as sentiment portrayal, or other relevant features, we hope to incorporate those additional features into the product.