-
Hello, so do I understand it right that you want to store the links you find on a page and only filter and enqueue them after you gather a bigger batch?
-
Hey! Love crawlee so far. The main issue I'm facing is that I want to filter the URLs to crawl for a given page using LLMs. Is there a clean way to do this? So far I implemented a transformer for `enqueue_links` which saves the links to a dict, and then I process those dicts at a later point in time using another crawler object. Any other suggestions for solving this problem? I don't want to make the LLM call inside the transform function, since that would mean one LLM call per URL found, which is quite expensive.
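To make the batching idea concrete, here is a minimal, library-agnostic sketch of the pattern described above: the transformer only records links, and filtering happens once per batch rather than once per URL. `llm_filter_batch` and `BatchedEnqueuer` are hypothetical names introduced here for illustration; the LLM call is mocked as a keyword filter so the sketch runs on its own, and you would wire `add` into your `enqueue_links` transformer and swap in your real LLM client.

```python
from collections import deque

BATCH_SIZE = 50


def llm_filter_batch(urls):
    # Hypothetical stand-in for ONE LLM call that filters a whole batch
    # of URLs at once. Mocked as a keyword filter so the sketch runs.
    return [u for u in urls if "/docs/" in u]


class BatchedEnqueuer:
    """Buffer discovered links; filter and enqueue them only in batches."""

    def __init__(self, enqueue, batch_size=BATCH_SIZE):
        self._enqueue = enqueue        # callback that actually enqueues a URL
        self._batch_size = batch_size
        self._buffer = deque()

    def add(self, url):
        # Called from the link transformer: just record the URL, no LLM call.
        self._buffer.append(url)
        if len(self._buffer) >= self._batch_size:
            self.flush()

    def flush(self):
        # Drain the buffer and make a single LLM call for the whole batch.
        if not self._buffer:
            return
        batch = list(self._buffer)
        self._buffer.clear()
        for url in llm_filter_batch(batch):
            self._enqueue(url)


# Usage: accumulate links during the crawl, flush any remainder at the end.
queued = []
enq = BatchedEnqueuer(queued.append, batch_size=3)
for link in ["https://x.com/docs/a", "https://x.com/blog/b", "https://x.com/docs/c"]:
    enq.add(link)
enq.flush()
```

This keeps the cost at one LLM call per `batch_size` links instead of one per link, at the price of enqueueing slightly later; a second crawler object, as in the question, is no longer strictly needed since the same crawler can enqueue the filtered batch.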