web_literature_mining
community [skill]
Scientific Literature Mining - Mine scientific literature across PubMed, arXiv, general web search, and Tavily deep search. Use this skill for scientific-informatics tasks involving PubMed search, literature search, web search, and Tavily search. Combines 4 tools from 2 SCP servers.
$ /plugin install InnoClaw
Scientific Literature Mining
Discipline: Scientific Informatics | Tools Used: 4 | Servers: 2
Description
Mine scientific literature: PubMed search, arXiv search, web search, and Tavily deep search.
Tools Used
- pubmed_search from search-server (streamable-http) - https://scp.intern-ai.org.cn/api/v1/mcp/7/Origene-Search
- search_literature from server-1 (sse) - https://scp.intern-ai.org.cn/api/v1/mcp/1/VenusFactory
- search_web from server-1 (sse) - https://scp.intern-ai.org.cn/api/v1/mcp/1/VenusFactory
- tavily_search from search-server (streamable-http) - https://scp.intern-ai.org.cn/api/v1/mcp/7/Origene-Search
Workflow
- Search PubMed
- Search arXiv
- Web search
- Tavily deep search
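The four steps above are independent of one another, so they can also be fanned out concurrently rather than run sequentially. A minimal sketch with stub coroutines standing in for the actual tool calls (the stub names and return shapes are illustrative assumptions, not the real tool outputs):

```python
import asyncio

# Stub coroutines standing in for the four tool calls; in the real
# workflow each would be a session.call_tool(...) invocation.
async def search_pubmed(query):
    return {"source": "pubmed", "query": query}

async def search_arxiv(query):
    return {"source": "arxiv", "query": query}

async def search_web(query):
    return {"source": "web", "query": query}

async def tavily_search(query):
    return {"source": "tavily", "query": query}

async def run_all(query):
    # The searches share no state, so gather them concurrently;
    # results come back in the order the coroutines were passed.
    return await asyncio.gather(
        search_pubmed(query),
        search_arxiv(query),
        search_web(query),
        tavily_search(query),
    )

results = asyncio.run(run_all("CRISPR Cas9 gene therapy 2024"))
```

Concurrency mainly saves wall-clock time here; if the SCP servers rate-limit requests, the sequential form shown in the usage example below may be safer.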
Test Case
Input
{
"query": "CRISPR Cas9 gene therapy 2024"
}
Expected Steps
- Search PubMed
- Search arXiv
- Web search
- Tavily deep search
Usage Example
Note: Replace sk-b04409a1-b32b-4511-9aeb-22980abdc05c with your own SCP Hub API Key. You can obtain one from the SCP Platform.
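Rather than hardcoding the key in the script, you can read it from the environment. The variable name `SCP_HUB_API_KEY` below is an assumption for illustration, not a name mandated by the platform:

```python
import os

def get_api_key(default=None):
    # Read the SCP Hub API key from the environment instead of
    # hardcoding it; SCP_HUB_API_KEY is an assumed variable name.
    key = os.environ.get("SCP_HUB_API_KEY", default)
    if key is None:
        raise RuntimeError("Set SCP_HUB_API_KEY before running the workflow")
    return key
```

Keeping the key out of the source file avoids accidentally committing it to version control.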
import asyncio
import json
from contextlib import AsyncExitStack

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client
from mcp.client.sse import sse_client

API_KEY = "sk-b04409a1-b32b-4511-9aeb-22980abdc05c"
QUERY = "CRISPR Cas9 gene therapy 2024"

# Each server is declared with the transport it serves, per the Tools Used list.
SERVERS = {
    "search-server": ("https://scp.intern-ai.org.cn/api/v1/mcp/7/Origene-Search", "streamable-http"),
    "server-1": ("https://scp.intern-ai.org.cn/api/v1/mcp/1/VenusFactory", "sse"),
}

async def connect(url, transport_type, stack):
    headers = {"SCP-HUB-API-KEY": API_KEY}
    if transport_type == "sse":
        # SSE transports yield a (read, write) stream pair.
        read, write = await stack.enter_async_context(sse_client(url=url, headers=headers))
    else:
        # Streamable-HTTP transports yield (read, write, get_session_id).
        read, write, _ = await stack.enter_async_context(streamablehttp_client(url=url, headers=headers))
    session = await stack.enter_async_context(ClientSession(read, write))
    await session.initialize()
    return session

def parse(result):
    """Extract JSON (or plain text) from an MCP tool result."""
    try:
        if hasattr(result, "content") and result.content:
            c = result.content[0]
            if hasattr(c, "text"):
                try:
                    return json.loads(c.text)
                except json.JSONDecodeError:
                    return c.text
        return str(result)
    except Exception:
        return str(result)

async def main():
    async with AsyncExitStack() as stack:
        # Connect to both servers with their declared transports
        sessions = {
            name: await connect(url, transport, stack)
            for name, (url, transport) in SERVERS.items()
        }
        # (tool, server) pairs for the four workflow steps; the "query"
        # argument follows the test case above - check each tool's schema
        # (via session.list_tools()) if a call is rejected.
        steps = [
            ("pubmed_search", "search-server"),   # Step 1: Search PubMed
            ("search_literature", "server-1"),    # Step 2: Search arXiv
            ("search_web", "server-1"),           # Step 3: Web search
            ("tavily_search", "search-server"),   # Step 4: Tavily deep search
        ]
        for i, (tool, server) in enumerate(steps, start=1):
            result = await sessions[server].call_tool(tool, arguments={"query": QUERY})
            data = parse(result)
            print(f"Step {i} result: {json.dumps(data, indent=2, ensure_ascii=False)[:500]}")
        print("Workflow complete!")

if __name__ == "__main__":
    asyncio.run(main())
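Once all four steps return, the per-source hit lists can be merged and deduplicated before further analysis. The record layout below (a dict with a `title` key per hit) is an assumption for illustration; the actual tool output schemas may differ:

```python
def merge_results(*result_lists):
    """Merge hit lists from multiple sources, dropping duplicate titles.

    Assumes each hit is a dict with a 'title' key; real tool outputs
    may use a different schema.
    """
    seen = set()
    merged = []
    for hits in result_lists:
        for hit in hits:
            # Normalize titles case-insensitively to catch near-duplicates.
            key = hit.get("title", "").strip().lower()
            if key and key not in seen:
                seen.add(key)
                merged.append(hit)
    return merged

# Hypothetical sample data: the same paper surfaced by two sources.
pubmed_hits = [{"title": "Base editing advances", "source": "pubmed"}]
arxiv_hits = [
    {"title": "Base Editing Advances", "source": "arxiv"},
    {"title": "Cas9 delivery vectors", "source": "arxiv"},
]
combined = merge_results(pubmed_hits, arxiv_hits)
```

First-seen wins here, so ordering the inputs by source preference (e.g. PubMed before web search) decides which copy of a duplicate survives.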
technical
- github: SpectrAI-Initiative/InnoClaw
- stars: 374
- license: Apache-2.0
- contributors: 16
- last commit: 2026-04-20T01:27:21Z
- file: .claude/skills/web_literature_mining/SKILL.md