In today's technological explosion, where will the development of medical care and AI go? | DrPika's Blog

Introduction

This article truly showcases the entire process of clinical doctors evolving from skepticism to dependence on AI. As for the future, based on my understanding, AI will become increasingly powerful.

Here, I would like to thank @pika for sharing. He is an excellent radiologist who gave me a lot of help when I was learning radiomics. He also runs the first domestic radiomics discussion community, which anyone is welcome to join for discussion and learning.

Original Text

This article was converted by SimpRead, original URL blog.drpika.com

The idea for this topic came to me, a blogger who updates once a year, while I was working late at the hospital one night, finishing reports for physical-examination CT screenings. My mouse cursor was clicking through AI pre-screened suspicious lung nodules one by one, deciding whether to ignore each or include it in the report.

These ground-glass nodules, less than 5 mm in diameter, would have required at least 10 minutes of my time five years ago: repeatedly reviewing hundreds of CT slices, taking multiple measurements, performing three-dimensional target reconstruction, or pulling up medical records for comparison before drawing a conclusion. Now the algorithm marks them as carcinoma in situ with 99.5% confidence. After relying on imaging AI for six years, many doctors no longer have the confidence to pick out such inconspicuous lesions with their own eyes from hundreds of images.

I recalled the black server wrapped in dust-proof film that entered the department at the end of 2019. When it started, its fans sounded like a helicopter landing outside the window. We joked it was a “film-copying machine,” but who would have thought these noisy metal boxes were quietly rewriting the underlying logic of medical imaging.

When the phrase “AI-assisted diagnostic results are for reference only” first appeared below the physician’s signature, did we ever wonder whose name should be on this diagnostic report that could change a patient’s fate?

We are fortunate to have been born in an era of technological explosion and to have become doctors who work closely with AI: witnesses to a revolution in medical technology, accepting AI’s assistance and even being surpassed by it.

1. Preface

When I first joined the hospital in 2019, the work pressure was not heavy. Everyone chatted and read scans together in a cramped old office, and life was still relaxed and pleasant.

Once, I accompanied the director to a lung cancer surgical consultation where we saw a case: the patient was diagnosed with lung squamous cell carcinoma after admission, but bone metastases were already present, so the chance for surgery had been lost. Paradoxically, the patient had undergone a CT physical checkup at our hospital six months earlier, and the report noted no abnormalities. So I reviewed the patient’s CT images from a few months before: indeed, no lesion could be seen. Can squamous cell carcinoma really progress that fast, growing and metastasizing within months?

The director, living up to his position, reviewed the CT images several times, pointed his mouse at the edge of the hilar bronchus, and said that in hindsight there was a slight bulge; the lesion had been right there. Unfortunately, the slice thickness was 10 mm; with thin slices, the tumor contour would have been visible.

The next day, under the director’s arrangement, the physical examination center immediately changed the lung CT slice thickness from 10 mm to 2 mm. Speaking fairly and objectively, this inconspicuous measure, combined with the hospital’s large screening population, single-handedly lowered lung cancer incidence and mortality across the entire region.

However, going from 10 mm to 2 mm slices immediately increased our workload fivefold: instead of reviewing 100 CT slices per exam, we now had to review 500. Colleagues in the department were exhausted, and some wanted to revert to the old protocol.

The director asked for my opinion. I said thin-slice images were a must, but could we try imaging AI? I saw several domestic products were doing quite well.

2. Top Performer in Medical AI Application: Imaging AI

And so, at the end of 2019, after extensive contact and the signing of an agreement, the first server equipped with an AI lung nodule screening system was installed in our department.

Converting massive volumes of imaging data into binary form for high-throughput processing, then using deep learning and trained inference models to identify, label, segment, and predict the diagnosis of the corresponding diseases, is one of the hottest applied research directions in artificial intelligence today.
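In essence, the workflow looks roughly like the sketch below. This is a minimal, illustrative example rather than any vendor’s actual pipeline; the model file name, window values, and confidence threshold are assumptions for illustration only.

```python
# Illustrative sketch of an AI nodule-screening pipeline (not a vendor's real code).
import numpy as np
import SimpleITK as sitk
import torch

def load_ct_series(dicom_dir: str) -> np.ndarray:
    """Read a DICOM series and return the volume in Hounsfield units (z, y, x)."""
    reader = sitk.ImageSeriesReader()
    reader.SetFileNames(reader.GetGDCMSeriesFileNames(dicom_dir))
    return sitk.GetArrayFromImage(reader.Execute()).astype(np.float32)

def preprocess(volume: np.ndarray) -> torch.Tensor:
    """Clip to a lung window and scale to [0, 1] before inference."""
    volume = np.clip(volume, -1000.0, 400.0)
    volume = (volume + 1000.0) / 1400.0
    return torch.from_numpy(volume)[None, None]  # add batch and channel dims

# "nodule_net.pt" stands in for a trained, exported detection model.
model = torch.jit.load("nodule_net.pt").eval()
with torch.no_grad():
    prob_map = torch.sigmoid(model(preprocess(load_ct_series("/data/chest_ct"))))

# Voxels above a confidence threshold become candidate nodules shown to the radiologist.
candidates = (prob_map > 0.95).nonzero()
```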

Back then, when AI made its entrance at our hospital starting with imaging diagnosis, clinicians did not even have a copy-paste function in the electronic medical record system.

Honestly, the AI of that era acted like a highly gifted but still untrained apprentice. For a chest CT study, it would enthusiastically mark hundreds of suspicious nodules within five minutes, but the vast majority were just cross-sections of blood vessels, old scars, or imaging artifacts that had to be excluded manually, one by one.

In this field, “artificial intelligence” can be read quite literally: the more artificial (manual) labor you put in, the more intelligence you get out.

Some colleagues complained that this arbitrarily increased the workload, but huge potential was hidden among the numerous false positives.

If current imaging AI diagnosis lacks accuracy and speed, it must be because we haven’t fed the AI model enough data or the annotation quality isn’t high enough.

So I established contact with the development company and began regularly sending them anonymized images of cases that had been missed or misidentified, based on feedback from our clinicians, so that they could improve the model. In return, their latest optimized models were deployed on our system as soon as they were ready.
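For anyone curious what that feedback step involves, the sketch below shows the kind of de-identification pass a case would go through before leaving the hospital. It is a minimal sketch, not our actual tooling; the tag list and folder paths are assumptions.

```python
# Minimal DICOM de-identification sketch using pydicom (illustrative only).
from pathlib import Path
import pydicom

IDENTIFYING_TAGS = ["PatientName", "PatientID", "PatientBirthDate",
                    "PatientAddress", "InstitutionName", "ReferringPhysicianName"]

def anonymize_study(src_dir: str, dst_dir: str) -> None:
    """Copy every DICOM slice with direct identifiers blanked out."""
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).glob("*.dcm"):
        ds = pydicom.dcmread(path)
        for tag in IDENTIFYING_TAGS:
            if tag in ds:
                ds.data_element(tag).value = ""
        ds.remove_private_tags()  # drop vendor-specific private tags as well
        ds.save_as(Path(dst_dir) / path.name)

# Hypothetical folder names for a missed case queued for vendor feedback.
anonymize_study("/data/missed_case_001", "/data/outbox/missed_case_001")
```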

3. One Box Revolutionizing Decades of Work Habits

By the end of 2020, after more than a hundred rounds of consolidated feedback and three major version updates, colleagues’ work habits had quietly shifted from initial skepticism and rejection to heavy dependence. AI pre-screening had become an indispensable first pass in our image-reading workflow.

Routine image reading by radiologists requires layer-by-layer examination of every organ and tissue structure in the image, judging if morphology and gray scale are within normal range. This process is like a “spot the difference” game, but the difference is we do not know how many abnormalities exist or if the answers we find are correct.

This process is time-consuming, labor-intensive, and demands sustained concentration, which is one reason radiology departments are among the largest in a hospital. Ten minutes of focus on one patient’s images might be manageable, but when a doctor faces nearly a hundred patients and tens of thousands of image slices a day, this mechanical, tedious, repetitive labor, plus the risk of misdiagnosis or missed diagnosis from a momentary lapse, puts enormous pressure on every conscientious doctor and makes late-night overtime the norm.

The involvement of imaging AI fundamentally changes this dilemma. It can accurately locate lesions on a handful of images out of hundreds of slices and clearly mark the lesion areas with bounding boxes, so doctors can focus their review where it matters. It is like playing “spot the difference” with the answer key in hand, greatly improving work efficiency and diagnostic confidence.

“Currently, the annual growth rate of medical imaging data in China is about 30%, while the growth rate of doctors is only 4.1%. That is, the growth rate of radiologists is one-tenth of the growth rate of patient demand.”

— On May 30, 2019, at the 13th Annual Conference of Radiology Physicians of the Chinese Medical Doctor Association, Liu Shiyuan, the chairman of the Radiology Branch of the Chinese Medical Association, stated that China urgently needs to fill the gap in radiologists.

Wang Jianhua, a radiologist at Ningbo University Affiliated Hospital, once tallied radiologists’ workload: 80–100 CT reports, 60–80 MRI reports, or 120–150 ultrasound sites per day. Even at 7–8 minutes per report, it takes 10 hours to finish them all; for difficult cases, even 15 hours is not enough.

Imaging AI’s true coming-of-age in our department came soon after. When the COVID policy shifted, staffing was thinned by illness, and those who remained faced enormous pressure on the front lines. Meanwhile, the hospital decided to offer free lung CT screening to more than 5,000 staff members.

Facing a mountain of work pressure, AI played a decisive role.

For the first time, our department relied entirely on AI to produce diagnostic reports. Radiologists reviewed only suspected pneumonia, mediastinal lesions, and the like; lesions marked by AI, including nodules, fibrosis, pleural lesions, and rib fractures, were in principle not reviewed again. Thus: 5,000 cases, two doctors, three days, basically cleared.

Later, more than 20 employees whose suspicious lung nodules had been flagged by AI were referred to thoracic surgery for consultation, evaluation, and operation, and postoperative pathology confirmed early-stage cancer in every one. That 100% diagnostic hit rate became a story people loved to retell.

By that time, we had formally purchased this imaging AI system through a tender, and the manufacturer had obtained NMPA Class III certification.

Imaging AI has proven itself as a core productivity force truly empowering healthcare.

4. The All-Competent Imaging AI, An Enabler

Automatic detection of lung nodules, pneumonia, and fractures is only the earliest and most mature of the imaging AI solutions. As the market has developed and doctors have come to recognize AI’s value, more and more applications have emerged. There is nothing AI can’t do, only things doctors haven’t thought of yet.

Once colleagues and leaders had tasted the benefits, we introduced dozens of different types of imaging AI over the following years, covering imaging diagnosis, post-processing, disease assessment, preoperative surgical planning, film layout, and printing.

From then on, we could proudly say that ours was the most intelligent of the hundreds of departments in the entire hospital.

Besides, imaging AI has platform capabilities, benefiting not only the radiology department but also providing an imaging intelligence platform that clinical departments across the hospital can share to improve diagnosis and treatment efficiency.

  • Empowering Radiology Department

    We used to need about seven to eight dedicated positions to handle daily imaging post-processing, film layout, and printing…

    Since imaging AI can automatically reconstruct, lay out, and print with one click, long-standing issues such as inconsistent film printing standards and quality, untimely printing, and excessive manpower consumption have all been solved by AI.

    Look at the beautifully and neatly arranged post-processing and layout images. Doctors need to do only two things: ① Make sure the reconstructed images look fine; ② Click “Print” and go!

    As a result, our dedicated staff shrank from seven or eight people down to just two, and this AI-assisted role is now known as the lightest shift; everyone competes to take it.

  • Empowering Stroke Center

    Stroke centers admit patients with unknown onset times and routinely complete a one-stop exam of plain CT, arterial CTA, and perfusion imaging, used to distinguish hemorrhagic from ischemic stroke, identify the responsible vessels and degree of stenosis, and evaluate the ischemic penumbra for thrombolysis or thrombectomy.

    However, this manual process of CT scanning, image post-processing, and preliminary diagnosis took more than 40 minutes, far exceeding the advanced stroke center requirement that the average DNT (door-to-needle time, from admission to thrombolytic treatment) stay within half an hour.

    The combination of head-and-neck vascular and cerebral perfusion AI modules with high-end CT scanners equipped with wide-body detectors shortened image post-processing and initial diagnosis to 5 minutes. This helped neurology and the stroke center establish a standard acute stroke unit early, dramatically improving the DNT and the center’s construction-quality ranking, and allowing a hospital in a fourth-tier city to rank 38th nationwide.

  • Empowering Oncology Surgery

    A surgeon who can’t read CT images is not a good surgeon. Besides communicating with patients in clinic, surgeons spend much of their time reading images on their own. The 500-slice thin-section CTs that torment radiologists trouble surgeons just as much.

    For example, some sharp-eyed thoracic surgeons noticed the imaging AI web portal during radiology consultations, got hooked after trying it, and found, to their surprise, that showing patients the AI results made persuasion much easier: “Look, after all this follow-up, AI has flagged it as high-risk. What are you waiting for? Let’s prepare for surgery.”

    Additionally, AI-based 3D visualization surgical navigation is also their favorite:

    • Previously, lung lobectomy was required; with navigation, only lung segmentectomy is needed;

    • Resection margins that previously could not be judged by eye can now be accurately delineated by vascular territory, with AI identifying the responsible vessels, in keeping with the principle of minimal resection;

    • Previously, patients had to stay 1-2 extra days for 3D models before surgery; now it only takes 5 minutes.

    I have actually done research in this area myself, with patents and software copyrights to show for it. Below is an excerpt from a project proposal PPT that was, tragically, rejected.

  • Empowering Physical Examination Center

    Early breast cancer screening relies mainly on ultrasound and mammography, with ultrasound as the mainstay … Some might want to argue: the guidelines recommend mammography first, with ultrasound as a supplement, so how can I call ultrasound the mainstay?

    Consider this: breast cancer has among the highest missed-diagnosis rates in early screening worldwide and is a leading reason radiologists get sued in some Western countries. Our hospital has a short history with mammography, the skill threshold is high, and the talent pool is thin; probably no more than two colleagues could confidently claim never to have missed a breast cancer on mammography. Hence our mammography work-ups also require ultrasound, which is consistent with the guidelines’ optimal recommendations.

    Since mammography diagnostic AI arrived, colleagues report with more ease and less worry: suspicious calcifications, benign calcifications, lymph nodes, glandular hyperplasia, BI-RADS classification, all well covered by AI. There is no more dread in making the call, since the AI’s predictive confidence is quite high, and mammography reporting time has shortened to within half an hour. As the unit with the largest mammography screening volume, the physical examination center’s efficiency has risen accordingly.

  • Empowering Pediatric Care Center

    Bone age AI also holds Class III certification. Previously, when parents brought a child in for bone age evaluation by wrist DR, the doctor would compare the film page by page against the imprecise and clinically problematic GP atlas, deciding whether this sesamoid or that phalangeal epiphysis looked slightly more or less developed in order to estimate bone age.

    A doctor not specialized in bone age assessment would spend 20–30 minutes per evaluation using the GP atlas, the simplest of all the methods, and the errors were still huge. This is not only because the GP atlas was built on Nordic populations, but also because it lacks quantitative standards and leans heavily on subjective judgement; results from different doctors, or from the same doctor at different times, commonly differ by 1–2 years.

    A wrist DR that cannot be read in under half an hour, with results that vary wildly between readers, is one of the report types radiologists dislike most. Everyone avoids bone age assessments, which are billed at only 35 RMB (strictly capped by insurance); the brain cells consumed aren’t worth the income, so it is obviously not cost-effective.

    Now, bone age AI uses standards such as RUS-CHN, which better fit Chinese children’s physique, quantitatively scoring each wrist bone and computing bone age statistically, with good reproducibility and objectivity. It can also predict final adult height from median height and weight percentile curves. Doctors and parents who have used it say it’s great.

  • Empowering Cardiovascular Medicine

    Previously, the textbook screening method for chest pain was the treadmill exercise test, but clinically that method is outdated: if ACS were triggered on the treadmill, given today’s tense medical environment and doctor-patient relationship, it could get very messy.

    Clinical diagnosis of coronary heart disease used to rest mainly on coronary angiography: threading a guidewire through the radial artery to the coronary ostium, then catheterizing and injecting contrast agent to see which vessel is narrowed, the definitive way to confirm that coronary artery disease is behind the heart problem. However, the procedure is costly, time-consuming, and exposes both doctors and patients to significant radiation. Under DRG/DIP payment and centralized procurement of consumables, coronary angiography volumes are expected to shrink gradually.

    Hence the latest wide-detector CT, teamed with accurate coronary AI, gets to shine: CCTA requires only a single CT scan, like holding a magnifying glass up to the coronary arteries:

    • AI reconstructs each branch clearly and fully, viewable horizontally, vertically, or rotating 360 degrees;
    • AI automatically diagnoses myocardial compression and quantifies stenosis degree;
    • AI assesses plaque properties (likelihood and timing of detachment);
    • AI visualizes 3D fractional flow reserve of coronary arteries (further reducing unnecessary stent surgeries);
    • All of these workflows take only 5 minutes (previously took at least an hour manually);
    • Non-invasive, accurate, and cheap.

    Medical insurance has aggressively reduced CCTA examination fees in recent years, and screening volume is expected to rise further with AI’s help, which benefits both the country and the public. Don’t underestimate it; it will preserve many people’s capacity to work and take part in society in the years ahead.

  • Establishing Imaging Research Platform

    The imaging research platform is like a martial arts master wielding deep learning in one hand and radiomics in the other. Our doctor friends knew this field was competitive and paper-driven, but having to learn Python, then MATLAB, and even some R as prerequisites was daunting.

    Now our AI research platform has packaged up the code: doctors can complete every step without writing a line, just by clicking buttons and drawing ROIs, from machine-learning modeling through statistical analysis to one-click generation of research conclusions. It can even produce charts ready to drop straight into a paper. Isn’t that great?
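Under the hood, what those research-platform buttons wrap is roughly the pipeline sketched below. This is a minimal sketch under stated assumptions, not the platform’s actual code: the file paths, label column, and model choice are illustrative.

```python
# Illustrative radiomics pipeline: extract features per nodule, fit a simple classifier.
import pandas as pd
from radiomics import featureextractor  # pyradiomics
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

extractor = featureextractor.RadiomicsFeatureExtractor()  # default feature classes

def extract_case(image_path: str, mask_path: str) -> dict:
    """Extract shape, intensity, and texture features for one image + ROI mask pair."""
    result = extractor.execute(image_path, mask_path)
    return {k: v for k, v in result.items() if not k.startswith("diagnostics_")}

# cases.csv is assumed to list an image path, mask path, and benign/malignant label per nodule.
cases = pd.read_csv("cases.csv")
features = pd.DataFrame([extract_case(r.image, r.mask) for r in cases.itertuples()])

# The kind of "one click" result the platform reports: a cross-validated AUC.
model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, features.astype(float), cases["malignant"],
                      cv=5, scoring="roc_auc")
print(f"Mean cross-validated AUC: {auc.mean():.3f}")
```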

The ten-plus roaring black metal server boxes in our department not only keep staffing steady and work pressure stable as the annual workload grows, but also empower clinical departments to improve their overall diagnostic and treatment efficiency and quality. The most successful application field for medical AI is, without question, imaging. Two blooms on one branch? Indeed!

According to a survey led a couple of years ago by Prof. Liu Shiyuan of Shanghai Changzheng Hospital, chairman of the Radiology Society of the Chinese Medical Association, about three-quarters of tertiary hospitals nationwide have adopted imaging AI. Competition in this field is fierce, with seven or eight leading vendors such as United Imaging, Deepwise, Infervision, and Shukun, plus tech giants like Alibaba Damo Academy, Tencent Miying, and Huawei Medical, competing and trying to overtake one another.

The future looks bright~

5. Beyond Imaging, We Are Also…

After talking so much about imaging AI, doesn’t it seem like our vision is too narrow?

Medical AI applications cover drug development, public health monitoring, medical administration, quality control, intelligent guidance and consultation, physician capability assessment, multidisciplinary collaboration, personalized medicine, and many other areas involving vendors and research teams.

Even amateurs like us pondered practical application scenarios when DeepSeek-R1 was released and caused a huge sensation at home and abroad. It was just after the Spring Festival when I was urgently summoned by experts to Changsha for meetings to discuss how to seize the initiative.

We never even finished a proper write-up of our ideas; my desktop still keeps a backup of the draft:

After extensive discussions with senior experts in the hospital, Professor Duan of Central South University, and senior leaders from the telecom operators, we ultimately decided to focus on AI quality control of radiology diagnostic reports using a large model. We gathered a large amount of data for testing and fine-tuning and did a great deal of annotation for reinforcement training. After more than half a year, we have some encouraging results (the latest being that Professor Duan of Central South University on our team has secured a 2025 National Natural Science Foundation general grant, a cause for celebration!).

The main idea is to take a consumer-grade open-source model such as DeepSeek and, using high-quality diagnostic report data plus expert physician annotations, fine-tune a large model that can automatically correct radiology diagnostic reports. The trained model is then deployed for inference directly inside the hospital on relatively low-cost hardware, minimizing privacy concerns. It can pull in relevant patient information and clinical indicators to analyze a report comprehensively and offer correction suggestions.
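The in-hospital inference step might look roughly like the sketch below. This is a minimal sketch under assumptions, not our production system: the local checkpoint path, prompt format, and report fields are all hypothetical.

```python
# Illustrative in-hospital inference for report quality control with a fine-tuned model.
from transformers import AutoModelForCausalLM, AutoTokenizer

CHECKPOINT = "/models/report-qc-finetuned"  # hypothetical locally fine-tuned checkpoint

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForCausalLM.from_pretrained(CHECKPOINT, device_map="auto")

def review_report(clinical_info: str, findings: str, impression: str) -> str:
    """Ask the fine-tuned model to flag inconsistencies and suggest corrections."""
    prompt = (
        "You are a radiology report quality-control assistant.\n"
        f"Clinical information: {clinical_info}\n"
        f"Findings: {findings}\n"
        f"Impression: {impression}\n"
        "List any inconsistencies, omissions, or diagnostic errors, and suggest corrections."
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=512, do_sample=False)
    # Return only the newly generated tokens, decoded to text.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
```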

The advantage is that, unlike the simple text proofreading of the past, the large model’s reasoning ability, combined with more complete clinical records and examination data, holds great potential for a higher level of quality control: correcting diagnostic bias, supplementing differential diagnoses, and even overturning some conclusions.

Of course, we are still at a fairly early stage. We have not yet integrated with the PACS system to unify the workflow, and many of the quality-control suggestions are trivial or insignificant and need further tuning…

However, since OpenAI has already iterated up to GPT-5, I believe that as long as we invest effort into data work and annotations, our project should not be too difficult!

Of course, painting a big picture for leadership at the right moment is also essential; without people, funding, and leadership support, even Einstein could not have discovered relativity.

Over the past year or two, the general intelligence demonstrated by AI, and large models in particular, has injected strong confidence into every industry. Seen from another angle, knowing how to use large models well can greatly expand the boundaries of one’s abilities. Earlier this year, even an HR recruiter from DeepSeek (the company behind the models) came to poach me; things are looking up.

Today I am a heavy user of various large models, from web-based chat to Cursor and the Claude Code CLI. Every excellent development tool is a brand-new productivity tool for me: it lets someone like me, who doesn’t know SQL, operate databases in plain Chinese, and lets me understand the architecture of our hospital’s PACS system without knowing Java, and even make changes that bring it closer to clinical needs.

These tools underpin the added value of my role as the department’s information specialist, letting me do work that is hard to replace.

For example, with the help of ChatGPT and Codex, I wrote two small, unremarkable tools, one in PHP and one as a bash script:

Though small, these tools are essential for some colleagues in certain departments. I also gained emotional value from this — a win-win situation (except for my family who aren’t happy about my frequent overtime).

Outside daily clinical duties, my most important and time-consuming work is research. Although there is little I can disclose due to confidentiality, I can show the technical roadmap of the provincial self-funded project I worked on with Director He this year (this single diagram was revised countless times under Dr. Li’s guidance).

And the computing resources we have gradually accumulated:

Hopefully next year I will have the chance to show off (brag) about the results our team has published.

6. When will radiologists become obsolete?

That being said, with AI so powerful and advancing so rapidly today, many people pose this question.

Imaging AI has advantages that humans cannot match. As early as 2010, in a human-machine contest of imaging diagnosis, Mayo Clinic experts were beaten by Google’s programmers. Imaging AI’s sensitivity, accuracy, stability, and speed in detecting disease are intimidating, and it never tires, running 24/7 all year round as an unstoppable contender. How can humans compare?

What, then, is the core value of radiologists? Opinions vary widely. I consider myself an optimistic realist: from a purely technical standpoint, AI fully surpassing human doctors is a historical inevitability, and internists may face that storm even earlier than radiologists.

So where does my optimism come from? First, strong artificial intelligence has not yet appeared; second, when it does, the disappearance of many professions will actually be one of the most negligible things society faces.

The capability of imaging AI depends heavily on scan image quality. In the foreseeable future, human doctors will be the key figures deciding image quality, avoiding artifact interference, and ensuring accurate quality control.

Moreover, comprehensive analysis and communication remain the strengths of human experts. AI can indeed detect suspicious nodules in the lung and use big data, morphology, size, texture features, and high-dimensional omics characteristics to analyze and predict whether a nodule is benign or malignant, but that is where it stops. Medicine is not just cold data.

Patients do not seek a diagnosis label that is simply black or white — they need a personalized plan that includes emotional value, in other words, humanistic care.

“AI recognizes images, while doctors recognize people; AI diagnoses diseases, while doctors treat minds.” This is a phrase often used in our promotional materials.

For example, based on clinical experience and examination indicators, you judge that a nodule flagged as high-risk by AI is controllable in the short term. The patient, who strongly resists surgery, has questions; you spend five minutes answering them face to face, and finally tell them in an authoritative, unequivocal tone: “Don’t believe the online ads for drugs that claim to dissolve nodules. Do nothing for now, come back for a follow-up in a year, and that will be enough.”

I often do this. Such “talk therapy” usually works well — even more effectively than chatting with current AI for half an hour. This also indirectly confirms the famous saying, “Sometimes curing, often helping, always comforting.”

Of course, human doctors should not pin their hopes on ethical stagnation to secure their iron rice bowl. According to Marx and Engels, all ethical systems (and even political systems) have a class character and rest on the economic base and the development of productive forces; when they hinder productive forces, reform or reconstruction naturally follows. Looking back at Deng Xiaoping’s Southern Tour just over thirty years ago, how many earth-shaking changes have taken place since then in biology, medicine, communications, information, and AI?

Imaging medicine is a relatively young specialty, yet among clinical medicine disciplines, it is one of the most affected by technical revolutions in terms of professional connotation changes.

For example, as of today, bone age assessment AI products from nearly 20 manufacturers have obtained NMPA Class III certification. How much longer can old-school technicians keep spending half an hour with the GP atlas to assess a bone age, charging 35 yuan from which the hospital deducts its cut?

And how far can imaging doctors who mechanically copy and paste AI results into the report editor and click submit ride this wave of automation?

Therefore, I still believe the future of imaging doctors lies in clinical practice, multidisciplinary collaboration, interventional diagnosis and treatment, and certainly in designing higher-level medical information architectures.

Of course, if you ask me today whether this specialty is worth entering, my answer aligns with Zhang Xuefeng’s: “Don’t enroll in medical imaging; sooner or later it will be replaced by AI.”