
Error after processing through many #27

Open
justkorix opened this issue Jan 19, 2023 · 11 comments

Comments

@justkorix

Was going great but errored out after a while, not sure what happened

Traceback (most recent call last):
  File "/Users/kori/Netflix-to-Trakt-Import/netflix2trakt.py", line 95, in <module>
    tmdbResult = tmdbSeason.details(tv_id=showId, season_num=season.number, append_to_response="translations")
  File "/Users/kori/Library/Python/3.9/lib/python/site-packages/tmdbv3api/objs/season.py", line 29, in details
    return AsObj(
  File "/Users/kori/Library/Python/3.9/lib/python/site-packages/tmdbv3api/as_obj.py", line 9, in __init__
    raise TMDbException(entries["status_message"])
@jensb89
Owner

jensb89 commented Jan 21, 2023

Are you using the latest version? I fixed a similar bug recently.

If so, do you know at which series/movie it fails? Then I can try to reproduce and debug it.

@justkorix
Author

Yeah, it's the latest version; not sure where it's getting knocked out. If you can tell me where to grab the log, I'd be happy to upload it so you can see where the error is coming from.

@jensb89
Owner

jensb89 commented Jan 21, 2023

There should be a file "Netflix2TraktImportLog.log" in the folder.

Additionally, you could set TMDB_DEBUG to True in config.py for more info in that log file.

Alternatively, you could simply post your CSV file (or part of it) here, so I can try to reproduce the error with it. Thanks!
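
For reference, that toggle presumably looks roughly like this in config.py (the flag name TMDB_DEBUG is assumed from the comment above; the exact contents of the file may differ by version):

```python
# config.py (excerpt) -- flag name assumed, check your local config.py
TMDB_DEBUG = True  # write verbose TMDB request info to Netflix2TraktImportLog.log
```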

@Nackophilz

Same problem here

With a big file it can't process through everything (maybe a rate limit on TMDB API calls?)

@Nackophilz

This is definitely caused by large CSV files.

I added a function to the Python script that splits one CSV into several CSV files of at most 500 lines each (as many as needed),

then loops over each new CSV file.

And voilà! :)
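
Nackophilz didn't share the script, but a minimal sketch of that splitting approach could look like this (file names, chunk size, and the output directory are illustrative, not from the actual script):

```python
import csv
import os

def split_csv(path, max_rows=500, out_dir="chunks"):
    """Split a Netflix history CSV into files of at most max_rows
    data rows each, repeating the header row in every chunk."""
    os.makedirs(out_dir, exist_ok=True)
    chunk_paths = []
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader)
        chunk, index = [], 0
        for row in reader:
            chunk.append(row)
            if len(chunk) == max_rows:
                chunk_paths.append(write_chunk(out_dir, index, header, chunk))
                chunk, index = [], index + 1
        if chunk:  # remainder smaller than max_rows
            chunk_paths.append(write_chunk(out_dir, index, header, chunk))
    return chunk_paths

def write_chunk(out_dir, index, header, rows):
    out_path = os.path.join(out_dir, "NetflixViewingHistory_%03d.csv" % index)
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)
    return out_path

# Each resulting chunk can then be run through netflix2trakt.py in turn.
```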

@justkorix
Author

Any chance I could snag that script? Would love to get the rest added to Trakt!

@ahknight

When I ran into this error, it was because the script was searching for season "None": line 95 was calling tmdbSeason.details outside of a try block. I added a try block (mirroring the one above it, for the tmdbSeason.details call on line 72), and the script then skipped over an entry in the CSV that looked like:

ShowName: NameOfFirstHalf / NameOfSecondHalf

It should probably default to searching on season 1 if season is None.
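
For anyone wanting to apply the same workaround, here is a minimal sketch of that guard (the call itself is taken from the traceback above; the logging call and the exception import path are assumptions, not the repo's exact code):

```python
import logging

from tmdbv3api.exceptions import TMDbException

# showId, season, and tmdbSeason come from the surrounding script context.
try:
    tmdbResult = tmdbSeason.details(
        tv_id=showId, season_num=season.number, append_to_response="translations"
    )
except TMDbException as err:
    # Skip entries TMDb cannot resolve instead of aborting the whole import.
    logging.warning("Skipping %s season %s: %s", showId, season.number, err)
    tmdbResult = None
```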

@jensb89
Owner

jensb89 commented Feb 2, 2023

Thanks for the updates @ALL.

A few thoughts on this:

  • There shouldn't be a search for "None" on line 95, because that code only runs when season.number != None, so I don't think that is the problem. Some items can occasionally be skipped, though (rarely, when the schema in the CSV is hard to match).
  • A try block around the TMDB API call is still a good idea! I will implement that in the coming days.
  • The TMDB API has no official rate limits, but some people in forums write that its CDN has a basic limit of about 50 requests/second (one every 20 ms). We have a 100 ms sleep time, so we should be outside that range, but it might still be a problem.
  • The sync to the Trakt API already caps each call at a maximum of 1000 elements, but the TMDB search is currently done for all items first.
  • From what I read, it might be good to stop the script after around 500 items, sync them to Trakt, and then start with the next 500 (maybe after a break of 1-2 s); see the sketch after this comment.
  • That idea requires some more restructuring, though. When I find some time, I will try to change it that way.

For now, the best approach seems to be splitting large files into separate smaller files until the fix mentioned above is ready :)
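
A rough sketch of that batching idea (the lookup_on_tmdb and sync_to_trakt callables stand in for the script's existing lookup and sync steps; all names and numbers are illustrative, not actual repo code):

```python
import time

BATCH_SIZE = 500    # items per Trakt sync, per the suggestion above
PAUSE_SECONDS = 2   # short break between batches

def process_in_batches(entries, lookup_on_tmdb, sync_to_trakt):
    """Resolve and sync the viewing history in chunks instead of
    looking up every item on TMDB before the first Trakt sync."""
    for start in range(0, len(entries), BATCH_SIZE):
        batch = entries[start:start + BATCH_SIZE]
        resolved = [lookup_on_tmdb(entry) for entry in batch]
        sync_to_trakt([item for item in resolved if item is not None])
        time.sleep(PAUSE_SECONDS)
```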

@SawkeeReemo

https://hastebin.com/share/ilocahifuc.java

Having this problem as well. It's happening so often that the script is pretty much unusable at the moment.
NetflixViewingHistory.csv

@SawkeeReemo

Quick update: since I had over 10 years' worth of Netflix history to import, I switched to grabbing the more recent stuff first and copying that into a new CSV. I was able to get through most of it, which is good enough for me right now, but if I can help provide more data in any way to help you squash this bug, I'd be happy to. Great script you've written here; it must be a pain to parse all this out.

@jensb89
Owner

jensb89 commented Nov 12, 2023

Thanks for the updates @SawkeeReemo.
Yes, splitting into smaller CSVs is currently the solution here. It's good to know that it works for you and that the code is helpful to many people. In general, the code still works fine for files that are not too large. There are still some open points: for example, checking whether the same entry was already uploaded would be a nice feature, to avoid duplicate entries in Trakt. The large-file problem is another. However, some of these require a bigger code change and more testing. I hope to find some time for that soon. Still, any help here is appreciated :)
