Ecosyste.ms: Timeline

Browse the timeline of events for every public repo on GitHub. Data updated hourly from GH Archive.

BerriAI/litellm

xingyaoww created a comment on an issue on BerriAI/litellm
``` 18:08:22 - openhands:INFO: main.py:157 - Agent Controller Initialized: Running agent CodeActAgent, model anthropic/claude-3-5-sonnet-20241022, with actions: **MessageAction** (source=None) CO...

dependabot[bot] pushed 28 commits to dependabot/github_actions/github-actions-c6717330ff BerriAI/litellm
  • add code cov checks ae01de0
  • fix comment 85f1e5c
  • perf: remove 'always_read_redis' - adding +830ms on each llm call (#6414) * perf: remove 'always_read_redis' - addin... d59f8f9
  • feat(litellm_logging.py): refactor standard_logging_payload function … (#6388) * feat(litellm_logging.py): refactor ... c04c4a8
  • LiteLLM Minor Fixes & Improvements (10/23/2024) (#6407) * docs(bedrock.md): clarify bedrock auth in litellm docs ... 1cd1d23
  • track created, updated at virtual keys cce118e
  • add created_at, updated_at for verification token 8cd8d47
  • allow configuring httpx hooks for AsyncHTTPHandler (#6290) (#6415) * allow configuring httpx hooks for AsyncHTTPHand... cc8dd80
  • feat(proxy_server.py): check if views exist on proxy server startup +… (#6360) * feat(proxy_server.py): check if vie... 4e31005
  • feat(litellm_pre_call_utils.py): support 'add_user_information_to_llm… (#6390) * feat(litellm_pre_call_utils.py): su... 9fccf82
  • ui show created at date 2d2a2c3
  • add key/{token_id}/regenerate to internal user routes c4cab88
  • use static methods for Routechecks 2e0f501
  • use helper for _route_matches_pattern cdb94ff
  • test_is_ui_route_allowed 574f07d
  • fix name of tests on config c42ec81
  • unit test route checks 7db8d8b
  • fix RouteChecks test 99c7211
  • Merge pull request #6429 from BerriAI/litellm_ui_show_created_at_for_key (admin ui) - show created_at for virtual keys 5485a2a
  • fix typing on StandardLoggingMetadata 7c4c3a2
  • and 8 more ...

krrishdholakia created a comment on an issue on BerriAI/litellm
able to repro

xingyaoww created a comment on an issue on BerriAI/litellm
will test it now!

krrishdholakia created a comment on an issue on BerriAI/litellm
Fixed here - https://github.com/BerriAI/litellm/commit/8f37b2266717e5cfaafbac6a9c2a2bdc293bf01b @xingyaoww if my changes look okay to you, can we go with that instead - preferred as it includes ...

krrishdholakia pushed 1 commit to litellm_krrish_dev_10_25_2024 BerriAI/litellm
  • fix(factory.py): support anthropic prompt caching for tool results 58fe661

beat created a comment on an issue on BerriAI/litellm
confirming missing!

beat created a comment on an issue on BerriAI/litellm
Confirming that these views are missing, and that running that script fixes the broken "Usage" view. Was searching for hours until I found this bug ticket. Many Many Thanks! Hope that this is added...

xingyaoww created a comment on an issue on BerriAI/litellm
ahh - I set that inside `message.content`. So this one: https://github.com/BerriAI/litellm/issues/6422#issuecomment-2438757294 <img width="608" alt="image" src="https://github.com/user-attachmen...

krrishdholakia created a comment on an issue on BerriAI/litellm
their docs imply this should be working (from within the content list) <img width="752" alt="Screenshot 2024-10-25 at 2 09 21 PM" src="https://github.com/user-attachments/assets/0802f9a2-ca2f-485...

krrishdholakia created a comment on an issue on BerriAI/litellm
interesting - it fails for me <img width="1525" alt="Screenshot 2024-10-25 at 2 07 34 PM" src="https://github.com/user-attachments/assets/ed4149d6-c6be-4fda-ba6b-7f5ac95434c7">

xingyaoww created a comment on an issue on BerriAI/litellm
I was successful with this: ``` {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01V1paXrun4CVetdAGiQaZG5', 'content': [{'type': 'text', 'text': 'OBSERVATION:\nOn bra...
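For context, the general shape being tested in this thread (per Anthropic's Messages API) is a user turn whose content list carries a `tool_result` block, itself containing a list of text blocks. A minimal sketch, with an illustrative `tool_use_id` rather than one from the thread:

```python
# Minimal sketch of a tool-result user turn for Anthropic's Messages
# API. The tool_use_id and observation text are illustrative.
message = {
    "role": "user",
    "content": [
        {
            "type": "tool_result",
            "tool_use_id": "toolu_example",
            "content": [
                {"type": "text", "text": "OBSERVATION:\nOn branch master"},
            ],
        },
    ],
}
```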

krrishdholakia created a comment on an issue on BerriAI/litellm
<img width="1153" alt="Screenshot 2024-10-25 at 2 02 32 PM" src="https://github.com/user-attachments/assets/a1eebdbb-189e-47b4-9cc5-2193591fffca">

krrishdholakia created a comment on an issue on BerriAI/litellm
Anthropic rejects this message as well 😢 ``` {'role': 'user', 'content': [{'type': 'tool_result', 'tool_use_id': 'toolu_01V1paXrun4CVetdAGiQaZG5', 'content': [{'type': 'text', 'text': 'OBSERVATI...

xingyaoww created a comment on an issue on BerriAI/litellm
You mean to have a user set the cache at the message level? I'm ok with that - as long as we can do the caching properly!

krrishdholakia created a comment on an issue on BerriAI/litellm
hmm isn't that a bit weird @xingyaoww - so I might intend to cache just a single item in the list, but am now caching the entire list -> this would be unexpected behaviour imo. my preference here (o...

xingyaoww created a comment on an issue on BerriAI/litellm
@krrishdholakia yep! what I did in #6425 is check for any cache_control inside content; if so, remove it from the content blocks and move it to the message level 😓
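The workaround described above can be sketched as a small helper: scan a message's content blocks for `cache_control`, strip it from the blocks, and hoist it to the message level. This is only an illustration of the idea, not litellm's actual implementation in #6425:

```python
def hoist_cache_control(message: dict) -> dict:
    """Move any cache_control found in content blocks up to the
    message level. Hypothetical helper sketching the workaround
    described in the thread, not litellm's actual code."""
    content = message.get("content")
    if not isinstance(content, list):
        return message
    cache_control = None
    for block in content:
        if isinstance(block, dict) and "cache_control" in block:
            # Remove it from the block; keep the last one found.
            cache_control = block.pop("cache_control")
    if cache_control is not None:
        message["cache_control"] = cache_control
    return message
```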

codecov[bot] created a comment on a pull request on BerriAI/litellm
## [Codecov](https://app.codecov.io/gh/BerriAI/litellm/pull/6433?dropdown=coverage&src=pr&el=h1&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=BerriAI) ...

krrishdholakia created a comment on an issue on BerriAI/litellm
so then the correct input message would be ``` { "content": [ { "type": "text", "text": 'OBSERVATION:\nOn branch master\...

xingyaoww created a comment on an issue on BerriAI/litellm
Yeah - I contacted anthropic yesterday, and it seems you need to do it on `message.cache_control` for it to work 😓

krrishdholakia created a comment on an issue on BerriAI/litellm
``` E litellm.exceptions.BadRequestError: litellm.BadRequestError: AnthropicException - {"type":"error","error":{"type":"invalid_request_error","message":"messages.2....

KirillSuhodolov starred BerriAI/litellm

ishaan-jaff created a comment on an issue on BerriAI/litellm
thanks so much for the help on this @tyler-liner, the graph is really helpful
> So, I removed all threading-related code and compared the memory usage of the application. (all threading.Thread(...

tkg61 created a comment on an issue on BerriAI/litellm
It is **after** I click the green submit button that I get the above error. What is the route it should take after going to FQDN.com/fallback/login: is it FQDN.com/ui or something else?

xingyaoww created a comment on an issue on BerriAI/litellm
> would that be cached? anthropic's docs say the cached content has to be large enough

yes - in my local branch, I got the cache write tokens to be the same as the input tokens - this is the correct case

xingyaoww created a comment on an issue on BerriAI/litellm
another thing is that, with the fix here: https://github.com/BerriAI/litellm/commit/8f37b2266717e5cfaafbac6a9c2a2bdc293bf01b#diff-39721500d7df0dd4c6aa7d60d7ef78a38afaf136f182114d15f4e3c1e863a413 ...

krrishdholakia created a comment on an issue on BerriAI/litellm
would that be cached? anthropic's docs say the cached content has to be large enough
oh - I think I'll update my test to catch that it's being sent correctly

xingyaoww created a comment on an issue on BerriAI/litellm
I found that it can get "silently omitted" without a careful fix -- you can check the cache write/read tokens from the response; if both are zero, then it doesn't work :(
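The check described above can be sketched as follows. Anthropic's API reports `cache_creation_input_tokens` and `cache_read_input_tokens` in the response usage; if both are zero, the cache_control was silently dropped. Whether litellm exposes these under the same keys on its response object is an assumption here, so treat the field names as illustrative:

```python
def caching_worked(usage: dict) -> bool:
    """Return True if the response usage shows any cache activity.
    Field names follow Anthropic's usage object; whether litellm
    surfaces them under the same keys is an assumption."""
    write_tokens = usage.get("cache_creation_input_tokens") or 0
    read_tokens = usage.get("cache_read_input_tokens") or 0
    return write_tokens > 0 or read_tokens > 0
```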

xingyaoww created a comment on an issue on BerriAI/litellm
@krrishdholakia this line: https://github.com/BerriAI/litellm/commit/8f37b2266717e5cfaafbac6a9c2a2bdc293bf01b#diff-6b37b401056b3fb9e91a7fbe1631be6f14440c26756a87b39157fcb00562de39R523

krrishdholakia created a comment on an issue on BerriAI/litellm
oh it works for me
