20250722-Textual_v4.0.0_The_Streaming_Release

Original post summary

Textual v4.0.0: The Streaming Release

Will McGugan may no longer be running a commercial company around Textual, but that hasn't stopped his progress on the open source project.

He recently released v4 of his Python framework for building TUI command-line apps, and the signature feature is streaming Markdown support - super relevant in our current age of LLMs, most of which default to outputting a stream of Markdown via their APIs.

I took an example from one of his tests, spliced in my async LLM Python library and got some help from o3 to turn it into a streaming script for talking to models, which can be run like this:

```
uv run http://tools.simonwillison.net/python/streaming_textual_markdown.py \
  'Markdown headers and tables comparing pelicans and wolves' \
  -m gpt-4.1-mini
```

Running that prompt streams a Markdown table to my console.
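The post shows only the command above, not the script itself, but the moving parts are clear: a Textual `Markdown` widget fed by the async API of Simon's llm library. Below is a minimal hedged sketch of that combination, not his actual script. It re-renders the accumulated text with the long-standing `Markdown.update()` method on each chunk, the append-heavy pattern Textual v4's streaming work targets; the v4 release may also offer a more specialized streaming API for this.

```python
# A minimal sketch, not the published script: stream an LLM response into a
# Textual Markdown widget. Assumes textual and llm (>= 0.18, for async model
# support) are installed and an API key is configured for llm.
import sys

import llm
from textual.app import App, ComposeResult
from textual.widgets import Markdown


class StreamingMarkdownApp(App):
    """Render a model's Markdown output as it streams in."""

    def __init__(self, prompt: str, model_id: str = "gpt-4.1-mini") -> None:
        super().__init__()
        self.prompt = prompt
        self.model_id = model_id

    def compose(self) -> ComposeResult:
        yield Markdown()

    def on_mount(self) -> None:
        # Run the request as a background worker so the UI stays responsive.
        self.run_worker(self.stream_response())

    async def stream_response(self) -> None:
        model = llm.get_async_model(self.model_id)
        buffer = ""
        # llm's async responses are async-iterable, yielding text chunks.
        async for chunk in model.prompt(self.prompt):
            buffer += chunk
            # Re-render the accumulated Markdown after each chunk arrives.
            await self.query_one(Markdown).update(buffer)


if __name__ == "__main__":
    StreamingMarkdownApp(sys.argv[1]).run()
```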

Tags: async, python, markdown, ai, will-mcgugan, generative-ai, llms, textual, llm, uv

[Original post](https://simonwillison.net/2025/Jul/22/textual-v4/#atom-everything)

Further speculation

- **Textual's commercialization dilemma**: Will McGugan may no longer be running a commercial company around Textual, yet he is still actively pushing the open source project forward, hinting that the commercialization path for open source tooling can be hard, or that its developers prefer to keep the project's focus pure.
- **Practical value of streaming Markdown**: v4.0.0's streaming Markdown support directly targets the default output format of today's LLM APIs (a stream of Markdown), showing the framework tracking current trends; what goes unstated is that the feature likely grew out of recurring needs, or pain points, encountered when integrating with LLMs in real development.
- **LLMs as collaborators**: the author "got some help from o3", meaning OpenAI's o3 model, to turn a test snippet into a working script; LLM coding assistants are increasingly filling the role that private expert networks (Slack/Discord groups) once played in quickly unblocking this kind of work.
- **Under-advertised toolchain tricks**: the example uses `uv run` to execute a remote script directly by URL (rather than cloning it locally), an efficient Python toolchain practice that is documented but rarely emphasized, so it tends to spread through experience and word of mouth; a sketch of how this works follows this list.
- **Hidden knowledge in model selection**: the script specifies `-m gpt-4.1-mini`, OpenAI's lightweight GPT-4.1 variant; knowledge of which small models offer the best price/performance typically circulates through developer circles rather than official promotion.
- **Reuse value of test cases**: the author reused a snippet from Textual's own tests (e.g., the example in `test_snapshots.py`) as a starting point, hinting that an open source project's test suite can be a practical quick-start reference, a use that is rarely mentioned formally.
- **The quiet advantage of async**: using the async API of the LLM library (documented at `llm.datasette.io`) rather than synchronous requests may reflect lessons from real performance bottlenecks (such as latency when streaming responses), though the post never explains why async is necessary here; the sketch below also shows this usage.
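On the `uv run` point above: uv can execute a script straight from a URL because it reads PEP 723 inline metadata from the fetched file and resolves the listed dependencies into a throwaway environment before running it. Here is a hedged sketch of what such a self-contained script might look like (a hypothetical `ask.py`, not the script Simon published); it also illustrates the async llm usage from the last bullet.

```python
# /// script
# requires-python = ">=3.12"
# dependencies = ["llm"]
# ///
# `uv run https://example.com/ask.py 'some question'` fetches this file, reads
# the PEP 723 block above, builds a temporary environment with `llm` installed,
# and then executes the script in it; no clone or manual pip install needed.
import asyncio
import sys

import llm


async def main() -> None:
    # get_async_model() returns llm's async wrapper for the model; iterating
    # over the prompt's response yields text chunks as they arrive.
    model = llm.get_async_model("gpt-4.1-mini")
    async for chunk in model.prompt(sys.argv[1]):
        print(chunk, end="", flush=True)
    print()


asyncio.run(main())
```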