<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article">
  <front>
    <journal-meta>
      <journal-id journal-id-type="nlm-ta">REA Press</journal-id>
      <journal-title>REA Press</journal-title>
      <issn pub-type="ppub">3042-0180</issn>
      <issn pub-type="epub">3042-0180</issn>
      <publisher>
      	<publisher-name>REA Press</publisher-name>
      </publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.22105/scfa.v2i2.63</article-id>
      <article-categories>
        <subj-group subj-group-type="heading">
          <subject>Research Article</subject>
        </subj-group>
        <subj-group subj-group-type="keywords">
          <subject>Incremental learning</subject>
          <subject>Action recognition</subject>
          <subject>Knowledge distillation</subject>
          <subject>Deep learning</subject>
        </subj-group>
      </article-categories>
      <title-group>
        <article-title>Video Class-Incremental Learning for Action Recognition</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <name name-style="western">
            <surname>Mohanapriya</surname>
            <given-names>E.</given-names>
          </name>
          <aff>Department of Computer Science and Engineering, Sri Sivasubramaniya Nadar College of Engineering, Kalavakkam, Tamil Nadu, India.</aff>
        </contrib>
        <contrib contrib-type="author">
          <name name-style="western">
            <surname>Mirnalinee</surname>
            <given-names>T.T.</given-names>
          </name>
          <aff>Department of Computer Science and Engineering, Sri Sivasubramaniya Nadar College of Engineering, Kalavakkam, Tamil Nadu, India.</aff>
        </contrib>
      </contrib-group>
      <pub-date pub-type="ppub">
        <month>06</month>
        <year>2025</year>
      </pub-date>
      <pub-date pub-type="epub">
        <day>17</day>
        <month>06</month>
        <year>2025</year>
      </pub-date>
      <volume>2</volume>
      <issue>2</issue>
      <permissions>
        <copyright-statement>© 2025 REA Press</copyright-statement>
        <copyright-year>2025</copyright-year>
        <license license-type="open-access" xlink:href="http://creativecommons.org/licenses/by/2.5/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</p></license>
      </permissions>
	  <abstract abstract-type="toc">
		<p>
			In the domain of video-based action recognition, overcoming catastrophic forgetting while continuously learning new classes remains a major challenge. We propose a Video Class-Incremental Learning (VCIL) framework that addresses this issue through a teacher-student knowledge distillation strategy. Our approach combines response-based distillation, which aligns the student model’s predictions with the teacher’s softened outputs, and feature-based distillation, which ensures the student retains the internal feature representations learned by the teacher. Using the UCF101 action recognition dataset and a 3D ResNet backbone, our approach extracts spatiotemporal features to recognize actions over multiple incremental steps. Evaluated under several class-increment settings (10×5, 5×10, 2×25), the model retains knowledge of previously learned classes with high accuracy while learning new ones. The results show that our approach effectively mitigates forgetting while maintaining strong performance on new tasks.
		</p>
		</abstract>
    </article-meta>
  </front>
  <body></body>
  <back>
  </back>
</article>