#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Filename: nlp_tokenization_nltk.py
# Author: Jeoi Reqi

"""
This script performs tokenization on a given text using NLTK.

Requirements:
- Python 3
- NLTK library with the 'punkt' resource

Usage:
- Run the script, and it will print the tokenized words and sentences of the provided text.

Example:
    python nlp_tokenization_nltk.py

Output:
    Tokenized Words: ['Natural', 'Language', 'Processing', 'is', 'a', 'fascinating', 'field', '.', 'It', 'involves', 'the', 'use', 'of', 'computers', 'to', 'understand', 'and', 'process', 'human', 'language', '.']
    Tokenized Sentences: ['Natural Language Processing is a fascinating field.', 'It involves the use of computers to understand and process human language.']
"""

import nltk
from nltk.tokenize import word_tokenize, sent_tokenize

# Download the 'punkt' resource required by the tokenizers
nltk.download('punkt')

# Sample text
text = "Natural Language Processing is a fascinating field. It involves the use of computers to understand and process human language."

# Tokenize the text into words and sentences
words = word_tokenize(text)
sentences = sent_tokenize(text)

print("Tokenized Words:", words)
print("Tokenized Sentences:", sentences)