
First of all, I am new to Python/NLTK, so my apologies if the question is too basic. I have a large file that I am trying to tokenize, and I get memory errors.

One solution I've read about is to read the file one line at a time, which makes sense; however, when I do that, I get the error cannot concatenate 'str' and 'list' objects. I am not sure why that error appears, since after reading the file I check its type and it is in fact a string.

I have tried splitting the 7MB file into 4 smaller ones, and when running that, I get: error: failed to write data to stream.

Finally, when I try a very small sample of the file (100KB or less) and run the modified code, I am able to tokenize it.

Any insights into what's happening? Thank you.

# tokenizing large file one line at a time
import nltk
filename=open("X:\MyFile.txt","r").read()
type(raw) #str
tokens = '' 
for line in filename:
        tokens+=nltk.word_tokenize(filename)
#cannot concatenate 'str' and 'list' objects

The following works with a small file:

import nltk
filename=open("X:\MyFile.txt","r").read()
type(raw)
tokens = nltk.word_tokenize(filename)
Ben
Luis Miguel

2 Answers


Problem n°1: Because you call read(), you are iterating over the file's contents character by character. If you want to process it line by line, simply open the file (don't read it all at once) and iterate over file.readlines(), as follows.

Problem n°2: The word_tokenize function returns a list of tokens, so you were trying to concatenate a str and a list. You first have to turn the list back into a string; then you can concatenate it to another string. I'm going to use the join function to do that. Replace the comma in my code with whatever character you want as the glue/separator.

import nltk
filename = open("X:\MyFile.txt", "r")   # open the file; don't call read() on it
tokens = ''
for line in filename.readlines():
    tokens += ",".join(nltk.word_tokenize(line))   # join the token list into a string before concatenating

If instead you need the tokens in a list, simply do:

import nltk
filename = open("X:\MyFile.txt", "r")
tokens = []
for line in filename.readlines():
    tokens += nltk.word_tokenize(line)   # extend the list with this line's tokens

Hope that helps!

luke14free
    But be aware that `word_tokenize` assumes that it's running on a single sentence at a time, so this will give you some tokenization errors. Really you need to read a chunk of the file, split it with `sent_tokenize`, then pass that to `word_tokenize`. Which is a pain if you need to read line by line, and your sentences break across lines. So you might prefer to just live with the imperfections for now... – alexis Mar 28 '12 at 16:06
    Yes, my code is based on the fairly strong assumption that you cannot find a \n in the middle of a sentence. – luke14free Mar 28 '12 at 19:15
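
Putting the two comments above together, a minimal sketch of the chunked, sentence-aware approach might look like the following. The 1 MB chunk size is an arbitrary assumption, and a sentence that happens to span a chunk boundary will still be tokenized imperfectly, as noted in the comments.

import nltk

tokens = []
with open("X:\MyFile.txt", "r") as f:
    while True:
        chunk = f.read(1024 * 1024)              # read roughly 1 MB at a time (arbitrary size)
        if not chunk:
            break
        for sent in nltk.sent_tokenize(chunk):   # split the chunk into sentences first
            tokens += nltk.word_tokenize(sent)   # then tokenize each sentence separately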

In Python, file objects act as iterators, so you can simply iterate over the file without having to call any methods on it; each iteration returns one line.

Problem 1: You have created tokens as a string, while word_tokenize() returns a list.

Problem 2: Simply open the file for reading with open("filename", "r"), without calling read() on it.

import nltk
f=open("X:\MyFile.txt","r")
tokens=[]
for line in f:
    tokens+=nltk.word_tokenize(line)
print tokens
f.close()
Kalyan Kumar