
PEER REVIEW INSIDE CLAUDE

PhD-level peer review on your paper, inside Claude Desktop. Two outputs from one flow: a review you can defend against a committee, and a revised draft mapped to the comments.

You drop a draft into Claude Desktop. Claude hands the paper to Refine, which runs heavy parallel compute across hundreds of frontier LLM calls, examining your draft the way a senior academic in your field would. You get a structured review back with per-paragraph comments and an overall critique. Then Claude reads that review and rewrites your paper against it, mapping each change to the specific comment it addresses.

Two outputs from one flow. A review you can defend against a committee, and a revised draft you can send to coauthors the same afternoon.

This guide walks through the full setup end to end, including the four prompts you will paste into Claude, the Terminal commands you will run, and the long revision prompt that turns the review into real edits. Once set up, the whole flow takes about forty minutes of wall-clock time, with roughly ten minutes of hands-on work. The rest is wait time while Refine reads your paper.

Before you start

What you need

A Refine account. MCP access is on by default for every paid plan. The cheapest entry point is the Starter subscription at $40 per month, or a one-time pay-as-you-go pack starting at $49.99. A fresh free account also comes with one preview credit that works through the same MCP flow if you want to test the whole setup before paying.

Claude Desktop installed and signed in to your Anthropic account. If you do not have it, download it from claude.ai and install before continuing.

Your paper draft on disk as a .md, .docx, .pdf, or .txt file. Refine handles up to 70,000 words or 50 MB per submission, which covers most journal papers, grant applications, and single thesis chapters. For longer work, see the "How to adapt this" section at the end.
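If you want to confirm your draft fits before uploading, here is a minimal sketch of a check against those limits. It word-counts plain-text drafts (.md or .txt); for binary formats like .docx or .pdf, go by file size alone. The path in the usage example is a placeholder you would replace with your own.

```python
import os

def within_refine_limits(path):
    """Check a plain-text draft against Refine's per-submission
    limits: 70,000 words and 50 MB."""
    size_mb = os.path.getsize(path) / (1024 * 1024)
    with open(path, encoding="utf-8") as f:
        words = len(f.read().split())
    return words <= 70_000 and size_mb <= 50

# Usage (replace with your own path):
# within_refine_limits("/Users/your-name/Desktop/my-paper.md")
```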

A Terminal window. On macOS this is in Applications > Utilities > Terminal. On Windows, use Command Prompt or PowerShell. On Linux you already know where yours is. You will run three short commands in Terminal across the flow. Copy-paste only. No scripting.

Step 1

Sign up for Refine and confirm MCP access

Go to refine.ink and create an account. Any paid plan exposes the MCP server, at the prices listed above. Refine distinguishes between preview credits (a lighter, faster review; one free with every new account) and full credits (the complete review this guide walks through). Step 5 below uses a full credit by default. Preview credits work on the same flow if you want to try it once for free or iterate cheaply on short drafts.

MCP access is included by default on every paid account. There is no toggle or dashboard setting to flip. Once your account is paid (or while your free preview credit is unspent), the MCP endpoint accepts the OAuth handshake Claude Desktop initiates in the next step. If the OAuth step fails later on, the most likely cause is that all credits (preview and full) have been spent. Check your credit balance in the Refine dashboard.

Keep the browser tab open. You will sign back into Refine in Step 2 during the OAuth flow.

Step 2

Connect Refine inside Claude Desktop

Open Claude Desktop. Click on your profile icon in the lower left corner to open the sidebar menu and select Settings.

Inside Settings, find the tab labeled Connectors in the left sidebar. This is where Claude Desktop lists every Model Context Protocol server it knows about. You will see any connectors you have already installed, each shown as a card with a name, a short description, and a toggle indicating whether the connector is active. If your Claude Desktop build shows a banner saying Connectors have moved to a Customize page, follow that prompt and look for Connectors under Customize. The flow below is the same.

Click the button labeled Add custom connector at the bottom of the Connectors list. A small form opens asking for the connector name and the server URL.

For the name, type Refine. For the server URL, paste https://api.refine.ink/mcp. Click Save.

Claude Desktop will now attempt to connect to Refine's MCP server. Because Refine's MCP requires authenticated access, Claude Desktop will open an OAuth flow in a small popup window. Sign in with the Refine account you created in Step 1. Refine will show you the list of permissions the MCP server is requesting. Review them and approve.

When the OAuth window closes, return to the Connectors tab. The Refine card should now show a green status indicator and the word Connected. If instead the card shows an error, the most common causes are either that the account you signed in with does not have MCP access (check your tier in the Refine dashboard) or that the OAuth popup was blocked by your browser (try again with popups allowed).

Once Refine shows as Connected, Claude has access to roughly twenty tools under the Refine namespace, covering document upload, review processing, and history retrieval. You do not need to call these tools directly. Claude picks the right one based on what you ask it to do.

Step 3

Upload your paper

Open a fresh Claude Desktop chat. Do not reuse an old chat: leftover context from earlier tool calls can confuse Claude's choice of tools.

Copy this prompt exactly, replace the file path with the real path to your paper on disk, and paste it into Claude:

Paste into Claude Desktop
I want to run my paper through Refine for a full peer review.
The file is on my disk at: /Users/your-name/Desktop/my-paper.md

Please do the following, in order:
1. Call upload_document with that file path.
2. Return the curl command Refine gives back, clearly formatted, so I can run it in Terminal.
3. After I confirm the curl ran successfully and paste the task_id, track the conversion event stream so we know when the document is ready to review.

Claude will call the upload_document tool. Because document uploads are binary file transfers and the MCP protocol is not built for those, Refine returns a curl command for you to run locally. Claude shows you the command in a code block in the chat. It will look similar to this, with your own values in place of the placeholders:

Run in Terminal
curl -X POST "https://api.refine.ink/documents/upload" \
  -H "Authorization: Bearer <your-24h-token>" \
  -F "file=@/Users/your-name/Desktop/my-paper.md"

The Bearer token Claude includes is scoped to your Refine account and valid for 24 hours. You do not need to save it anywhere. If the token expires before you finish the flow, run the original prompt again and Claude will issue a new one.

To run the curl command, open Terminal. Click into the Terminal window so your keyboard focus is there. Paste the full curl command exactly as Claude returned it, including the backslashes and line breaks. Press Enter.

Within a second or two, Terminal will print a JSON response on the next line. It looks like this:

Terminal output
{"task_id":"34ab67f6-6792-453f-bc84-12b6613f8449","filename":"my-paper.md","format":"MD"}

The task_id is the handle you use to track the conversion in the next step. Select and copy the full string between the quotes. Leave Terminal open.
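If you would rather not copy the id by eye, the same extraction takes a few lines of Python. This sketch uses the sample response shown above; you would paste in the JSON your own upload returned.

```python
import json

# The JSON line Terminal printed after the upload curl (sample from above)
response = '{"task_id":"34ab67f6-6792-453f-bc84-12b6613f8449","filename":"my-paper.md","format":"MD"}'

task_id = json.loads(response)["task_id"]
print(task_id)  # -> 34ab67f6-6792-453f-bc84-12b6613f8449
```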

Step 4

Watch the conversion finish

Refine converts every uploaded document into its internal review format before the review engine can read it. This takes between ten and thirty seconds depending on paper length. You track the conversion with a second curl command.

Paste this into Terminal, replacing {task_id} with the task_id you copied from Step 3 and {your_token} with the Bearer token Claude returned in Step 3:

Run in Terminal
curl "https://api.refine.ink/documents/upload/events/{task_id}?token={your_token}"

The token goes in the query string here because this endpoint serves a Server-Sent Events stream, and SSE clients cannot attach an Authorization header the way a standard POST can.

Press Enter. Terminal will start printing events, one per line, as Refine works through the conversion:

Terminal output
event: queued
event: uploading_original
event: converting
event: uploading_markdown
event: complete

When you see event: complete, the line immediately after it contains a JSON payload with four fields. The one you need is document_id. It looks like this:

Terminal output
data: {"document_id":"6780cd11-2a87-467e-bdf7-3864ac70a7f2","word_count":3787,"content_url":"/documents/6780cd11-2a87-467e-bdf7-3864ac70a7f2/content","original_url":"/documents/6780cd11-2a87-467e-bdf7-3864ac70a7f2/original"}

Copy the document_id value. Terminal may keep the stream open for a few seconds after complete. Press Control-C to close it.
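The extraction can also be scripted. A sketch that parses the data: line, using the sample payload shown above in place of your own:

```python
import json

# The line Terminal printed after "event: complete" (sample from above)
line = 'data: {"document_id":"6780cd11-2a87-467e-bdf7-3864ac70a7f2","word_count":3787,"content_url":"/documents/6780cd11-2a87-467e-bdf7-3864ac70a7f2/content","original_url":"/documents/6780cd11-2a87-467e-bdf7-3864ac70a7f2/original"}'

# Strip the SSE "data: " prefix, then parse the JSON payload
payload = json.loads(line.split("data: ", 1)[1])
document_id = payload["document_id"]
print(document_id)  # -> 6780cd11-2a87-467e-bdf7-3864ac70a7f2
```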



Step 5

Kick off the review

Switch back to Claude Desktop. Paste this prompt, replacing {document_id} with the value you copied from Step 4:

Paste into Claude Desktop
Kick off a full review on document {document_id}. Use
documents_processDocument with preview set to false.
When Refine accepts the job, return the history_id and the
expected completion time so I can track it.

Claude calls documents_processDocument. Refine queues the review and returns a history_id. Claude displays it in the chat. It looks like this:

Claude output
history_id: 5a1fe6e1-e8ed-484b-b8bf-53f7e31261e5

Copy the history_id. At this point, one paid review credit has been consumed from your Refine account. The review is running in the background on Refine's servers. Expected completion is between five and forty minutes depending on paper length. Short conference abstracts come back in five. Full journal papers with heavy references take closer to forty.

You can close Claude Desktop or switch tasks while Refine is working. The review persists in your Refine history regardless of whether Claude is open.

Step 6

Track the review (optional)

If you want to watch Refine work in real time, paste this command into Terminal, replacing both the document_id and the history_id with your values, and reusing the same Bearer token:

Run in Terminal
curl "https://api.refine.ink/documents/{document_id}/process/events/{history_id}?token={your_token}"

You will see a progress stream. Each event is one structural step Refine is working through (claim extraction, literature cross-check, methodology scoring, writing quality pass, synthesis). When the stream ends, your review is ready.

If you do not care to watch, skip this step. Move on to Step 7 once the completion window Claude estimated in Step 5 has passed, typically fifteen to twenty minutes for a mid-length paper.
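If you would rather script the watch than eyeball Terminal, a small helper can pull the step names out of the raw stream text. This is a sketch that assumes the progress stream uses the same event: line format shown in Step 4.

```python
def event_names(stream_lines):
    """Pull event names out of raw SSE lines like 'event: converting'."""
    return [line.split("event:", 1)[1].strip()
            for line in stream_lines
            if line.strip().startswith("event:")]

# Feed it the lines curl printed (sample from Step 4's output):
sample = ["event: queued", "event: converting", "event: complete"]
print(event_names(sample))  # -> ['queued', 'converting', 'complete']
```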

Step 7

Read the review

Back in Claude Desktop, open a new chat or continue in the same one. Paste:

Paste into Claude Desktop
Pull the most recent Refine review from my history.
Show me everything:
- the overall feedback summary
- every individual comment, with the paragraph or section
  of the paper it refers to
- any scores or structured ratings Refine assigned
- any references or citations Refine flagged

Claude calls history_getHistorySession and returns the full review. You get a structured document with two layers.

The first layer is an overall critique, two to four paragraphs covering the paper's central argument, the strength of its evidence, its placement within the existing literature, and its most significant weakness.

The second layer is per-comment feedback. Each comment is anchored to a specific passage of your paper. The comment names one concrete issue, explains why it weakens the argument at that point, and suggests a direction for the fix. Issues Refine surfaces most often include overclaims the evidence does not fully support, methodological gaps a senior reviewer would flag, missing citations where your field expects them, internal contradictions between sections, and structural concerns about the order of arguments. Refine typically returns between ten and forty of these per paper.

Step 8

Let Claude rewrite your paper against the review

This is the step that pays off. Refine's review lives in your Claude chat context now. Your paper is also accessible. Claude can read both, hold them side by side, and produce a revised draft that addresses every actionable item in the review, tagged so you can see which change came from which comment.

You run this with one long prompt. The prompt is engineered carefully because the quality of the rewrite depends on how tightly the task is constrained. Copy it exactly. Paste it after you have the review loaded in your chat from Step 7, along with the full paper text in the same chat or attached as a file.

Long prompt

ACADEMIC_REVISION_ENGINE

~160 lines · paste into Claude Desktop
## PROMPT: ACADEMIC_REVISION_ENGINE

### Role assignment
You are Revision_Engine, a senior research editor with fifteen
years of experience turning drafted papers into publication-ready
manuscripts. You have worked across disciplines from molecular
biology to economic theory. You read papers the way a sharp editor
at a top field journal reads them: looking for overclaims, unsupported
causal language, missing counterevidence, gaps between claims and
the citations attached to them, and structural issues that make the
argument harder to follow than it needs to be.

Your personality is that of an editor who respects the author,
understands the argument, and wants the paper to be the strongest
possible version of itself. You rewrite surgically. You preserve
voice, argument, and contribution. You do not inject your own views
where the review is silent. You do not soften claims the review
did not ask to be softened.

Your output is a revised draft of the paper, plus a detailed change
log mapping every edit to the specific review comment that prompted
it.

### Core directive

You optimize for one outcome: a revised draft that addresses every
actionable item in the Refine review without distorting the author's
voice, argument, or contribution.

Cost of failure: if the revised draft reads like a different paper
written by a different person, the author will reject it and rewrite
from scratch, wasting the review pass. If review comments are
silently dropped, the author cannot defend the revision in front of
coauthors or reviewers. Both failure modes are worse than returning
a draft that acknowledges a comment as unaddressable with reasoning.

Decision framework when choosing between conflicting priorities:
1. Preserve the author's central argument and contribution above
   everything else. If a review comment would require abandoning
   the paper's main claim, flag it as a judgment call. Do not
   execute it silently.
2. Preserve the author's voice second. Technical vocabulary,
   sentence rhythm, and phrasing all stay. Rewrites match the
   surrounding prose.
3. Address review comments third. Every comment gets handled.
   Handling means either executing the comment or rejecting it
   with reasoning.

You will NOT: change the paper's thesis, add citations you cannot
verify from the paper's existing reference list, delete paragraphs
the review did not mark for deletion, introduce new arguments the
author did not make, or flatten technical language into generic prose.

### Input data specification

**Input 1: [PAPER_DRAFT]**
- Format: The full text of the paper, either pasted into the chat
  or attached as a file.
- Quality standard: Must be the same version of the paper that was
  submitted to Refine. If the author has made edits since
  submission, those edits must be merged in before running this
  prompt, otherwise the review comments will reference passages
  that no longer exist.

**Input 2: [REFINE_REVIEW]**
- Format: The output from Step 7 of this guide, including the
  overall feedback, every individual comment with its anchor
  location, and the scores Refine assigned.
- Quality standard: The review must be complete. If any comment
  says "see section 3" and section 3 is not in the review context,
  stop and ask the author to reload the review.

**Input 3: [AUTHOR_PRIORITIES]** (optional)
- Format: A short note from the author indicating which review
  comments to prioritize, which to address lightly, and which to
  reject outright. If absent, treat every comment as priority.
- Example: "Address all methodology comments in full. Address
  literature-placement comments lightly, as I will do a second
  pass on framing myself. Reject comment 14, which I disagree with
  on substantive grounds."

### Methodology: The three-pass revision framework

**Pass 1: Map every review comment to specific passages.**

For each comment in the Refine review, identify the passage in the
paper draft it anchors to. Quote the original passage verbatim in
your internal working memory. If a comment anchors to a location
you cannot find in the draft, flag it as a location-mismatch error
and include it in the open questions section of the output.

**Pass 2: Classify each comment.**

Each review comment gets one of three classifications:

- Must-address: the comment identifies a concrete issue that
  weakens the paper if unaddressed. Examples: overclaim the paper
  makes, missing counterevidence the reader would expect,
  internally inconsistent claim, citation that does not support
  the claim attached to it. Default to must-address unless the
  comment is clearly subjective or reflects reviewer preference
  without flagging a concrete issue in the paper.

- Judgment-call: the comment is a valid observation but the right
  response depends on the author's priorities or on information
  you do not have. Examples: structural suggestion that would
  require reorganizing a section, framing comment that would shift
  the paper's emphasis, suggestion to add discussion of a paper
  not in the current reference list. Surface these to the author
  with your recommended response and your reasoning, so the author
  can decide.

- Reject: the comment is based on a misreading of the paper,
  conflicts with a different comment in the same review, or asks
  for something outside the scope of this paper. Justify the
  rejection in one to three sentences. If you cannot justify
  rejection confidently, default to judgment-call.

**Pass 3: Execute.**

For every must-address comment, rewrite the relevant passage so
the issue is handled. The rewrite preserves voice, technical
vocabulary, and sentence rhythm. If the rewrite requires adding a
citation the reference list does not contain, insert a placeholder
in the form `[CITATION NEEDED: <topic>]` and note it in the open
questions section.

For every judgment-call comment, produce a recommended rewrite as
if it were must-address, but mark it clearly in the change log as
pending author approval. The author can accept, reject, or modify
each in the second pass.

For every rejected comment, explain the rejection in the change
log. Do not modify the paper for rejected comments.

### Task specification

**Objective:** Produce a revised draft of the paper that addresses
every actionable review comment, preserves the author's voice and
argument, and surfaces judgment calls and open questions for the
author's final sign-off.

**Constraints:**
- The revised draft must contain the complete paper with every
  section appearing in full.
- Voice preservation is binary. If the author writes in the first
  person plural, so does the revised draft. If the author uses
  hedged language in a specific way, the revision matches it.
- The change log must list every change. Silent edits are not
  permitted.
- The total length of the revised draft should stay within five
  percent of the original's length. If the review comments
  collectively demand more expansion or compression than that,
  flag it in the open questions.
- The output is plain text or markdown. No formatting that the
  author would have to strip out before sending to coauthors.

### Output specification

Produce five sections, in this order:

**Section 1: Revised draft**
The complete revised paper, formatted identically to the input
draft.

**Section 2: Change log**
A table or list with one entry per change. Each entry contains:
- Change number
- Review comment it addresses (number and short summary)
- Location in the paper (section heading or paragraph reference)
- What changed (short summary of the edit)
- Classification (must-address or judgment-call)

**Section 3: Judgment calls for author**
Every judgment-call entry from the change log, restated with your
recommended response and your reasoning in one to three sentences.
The author uses this section to accept, reject, or modify each
judgment call in a second pass.

**Section 4: Rejected comments**
Every rejected review comment, restated with your reasoning in one
to three sentences.

**Section 5: Open questions**
Anything you could not resolve. This includes citation placeholders,
location-mismatch errors, review comments that depend on information
outside the paper, and any conflict between review comments.

### Quality gate

The revised draft fails and must be regenerated if any of the
following are true:

- The author's central argument or thesis has changed. Revert.
- Any must-address comment has no corresponding change log entry.
  Loop back to the missing comment.
- The change log contains entries where "what changed" is vague
  enough that the author cannot tell what was actually done.
  Rewrite the affected entries with specifics.
- The revised draft introduces factual claims not present in the
  original and not supported by the review. Remove or mark as
  [CITATION NEEDED: <topic>].
- The voice of the revised draft is noticeably different from the
  original in any section. Rewrite the affected section matching
  the original voice.
- Fewer than ninety percent of the review's individual comments
  appear in the change log. Address the missing comments.

Run every quality gate before returning the output.

Paste the prompt. Then, in the same chat, make sure the original paper text is available to Claude. Either paste the full paper body after the prompt, or attach the paper file. If you attach it, Claude reads it as context. If you paste it, Claude holds it in chat memory directly. Both work.

Claude produces a revised draft, a change log, and the three support sections described in the prompt's Output Specification. Read the change log first, then skim the judgment calls, then read the revised draft end to end. Then reply in the same chat and tell Claude which changes to keep, which to drop, and how to modify any judgment calls. Claude returns the final version with those edits applied.

You now have a paper that has been reviewed by Refine's peer-review AI and revised by Claude against that review, with every change traceable back to a specific comment. Send it to coauthors. Or run it through Refine a second time for a fresh review on the revised draft, if the changes were substantial.

Edge cases

How to adapt this

For shorter papers under 1,500 words. Use a preview credit in Step 5 in place of a full credit. Change the prompt in Step 5 from preview set to false to preview set to true. The preview review returns three to five high-level comments covering the paper's overall shape, without per-paragraph annotations. Preview credits cost less, which matters when iterating on an abstract, a grant paragraph, or a single conference-submission draft. The rest of the flow, including Step 8, works the same way with the shorter review.

For very long papers above 20,000 words. Refine accepts up to 70,000 words per submission, which is long enough for a full journal paper or a single thesis chapter. For a full-length thesis or book, split the document into logical chapters, run each through the flow separately, and then run a synthesis pass by pasting all the reviews into a fresh Claude chat and asking Claude to identify cross-chapter patterns. The flow scales well because the expensive step, the review itself, happens on Refine's side, and the expensive context, the paper text, only loads into Claude for the revision step.

For non-paper work. Grant applications, lit reviews, and thesis defense prep documents all work with the same flow. The Step 8 prompt still applies unchanged. The only tweak is in the prompt's Input 1 standard, where you can note the document type so Claude preserves the right conventions in the revised draft.

THE FULL SYSTEM

We build research AI workflows end to end

If you want us to set up this review-and-revision flow inside your team's research workflow, including the MCP configuration, the Claude Desktop workspace, and the prompt library tuned to your field, book a call.
