Hello, my name is Onur Sahil. I studied Computer Science and Engineering in South Korea, where I also learned the Korean language for a year. While in school, I built a graduation project called Smart Grading System: it takes an image of a student's handwritten answers, recognizes the handwriting, and compares it against the correct answers to grade the paper. I placed 8th in my major with this project and was awarded $250. Around the same time, I started an internship in Natural Language Processing and developed a deep learning-based chatbot system.

After graduation, I joined a startup in South Korea that works on Natural Language Processing and data analysis as a full-time employee. There I worked on machine reading comprehension, a Q&A system built on Korean news data, fine-tuning Google's BERT models for English and Korean. I also replaced BERT's original WordPiece tokenizer with the SentencePiece tokenizer, because SentencePiece provides a built-in probability-based tokenization method. In addition, I adopted the LAMB optimizer for faster training and more accurate models. Since the system relies on news data, I built a news-crawling pipeline with Selenium and ran it inside an EFK stack (Elasticsearch, Fluentd, Kibana) using Docker.
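A setup like the EFK stack I mention can be sketched as a Docker Compose file. This is only an illustrative configuration, not the one I deployed: the image versions, ports, and volume paths are assumptions, and the stock Fluentd image would additionally need the `fluent-plugin-elasticsearch` plugin installed to forward logs into Elasticsearch.

```yaml
# Minimal EFK sketch; versions, ports, and paths are assumptions.
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
  fluentd:
    # Assumes a custom image or config with fluent-plugin-elasticsearch.
    image: fluent/fluentd:v1.16-1
    volumes:
      - ./fluentd/conf:/fluentd/etc
    ports:
      - "24224:24224"
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana:7.17.0
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```

In this arrangement the crawler's output is shipped to Fluentd, Fluentd indexes it into Elasticsearch, and Kibana provides the search and dashboard layer on top.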