AI-based interview/exam proctoring system with:
- Live webcam monitoring
- Face-count checks (`NO FACE`, `MULTIPLE FACES`)
- Head-pose + gaze checks (`LOOKING AWAY`)
- Real-time web dashboard and violation log
- Audible alert (buzzer) on violations
This project runs locally and serves a browser UI from FastAPI.
- External camera support with robust backend probing (`DSHOW`, `MSMF`, `ANY`)
- Camera selection from the UI (`/cameras`, `/set_camera/<index>`)
- Real-time status polling and MJPEG video stream
- Alert engine with timeout smoothing
- Trained CNN model support from local weights (`eye_cnn_final.pth`)
- MediaPipe with automatic OpenCV fallback when MediaPipe fails
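Backend probing can be sketched roughly as below; `probe_cameras` and the injectable `is_working` check are illustrative names for this README, not the actual functions in the repo:

```python
def probe_cameras(max_index=5, backends=("DSHOW", "MSMF", "ANY"), is_working=None):
    """Return camera indices that can actually provide frames.

    A camera counts as working if it opens and yields a frame under at
    least one of the candidate OpenCV backends. `is_working` can be
    swapped out for testing without a physical camera.
    """
    if is_working is None:
        import cv2

        name_to_id = {"DSHOW": cv2.CAP_DSHOW, "MSMF": cv2.CAP_MSMF, "ANY": cv2.CAP_ANY}

        def is_working(index, backend):
            cap = cv2.VideoCapture(index, name_to_id[backend])
            try:
                if not cap.isOpened():
                    return False
                ok, frame = cap.read()
                return ok and frame is not None
            finally:
                cap.release()

    return [i for i in range(max_index) if any(is_working(i, b) for b in backends)]
```

Probing with a real read (not just `isOpened()`) matters because some virtual-camera drivers open successfully but never deliver frames.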
- `server.py`: Main FastAPI server (recommended entrypoint)
- `proctor_live.py`: Core proctoring engine
- `index.html`, `style.css`, `app.js`: Frontend dashboard
- `eye_cnn_model.py`: CNN model definition
- `eye_cnn_final.pth`: Final trained gaze model used during proctoring
- `pretrained_eye_cnn.pth`: Stage-1 pretrained weights
- `dataset_loader.py`, `train_pretrain.py`, `train_finetune.py`: Training pipeline
- `people_detector.py`: YOLO-based multi-person utility
- `capture_dataset.py`, `collect_images.py`: Dataset collection utilities
- `PROCTORING_SYSTEM_EXPLAINED.md`: Extended architecture explanation
Install all dependencies:
```
pip install -r requirements.txt
```

Main dependencies:
- FastAPI
- uvicorn
- pydantic
- Flask
- flask-cors
- opencv-python
- torch
- numpy
- mediapipe
- ultralytics
```
cd /d E:\ritesh_exp\proctoring_fastapi
python -m pip install -r requirements.txt
python server.py
```

Open: `http://127.0.0.1:5000`
- Open UI in browser.
- Wait for camera list to load.
- Select your external camera index.
- Click Start Proctoring.
- Monitor the `SAFE`/`ALERT` state and violation log.
- `GET /start` → start proctoring thread
- `GET /stop` → stop proctoring thread
- `GET /status` → current status JSON (`SAFE` or `ALERT` + reason)
- `GET /cameras` → list camera indices that can actually provide frames
- `GET /set_camera/<index>` → switch active camera
- `GET /video_feed` → MJPEG stream
- `GET /frame` → single JPEG frame
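A minimal client-side sketch of consuming `/status` with the standard library; the exact JSON field names (`status`, `reason`) are an assumption about the response shape, so check them against the real server:

```python
import json
from urllib.request import urlopen

BASE = "http://127.0.0.1:5000"

def parse_status(payload):
    """Extract (state, reason) from a /status JSON payload.

    Field names here are assumed; adjust to the actual response.
    """
    data = json.loads(payload)
    return data.get("status", "UNKNOWN"), data.get("reason", "")

def poll_status():
    """Fetch the current proctoring state from the running server."""
    with urlopen(BASE + "/status") as resp:
        return parse_status(resp.read().decode("utf-8"))
```

The dashboard's `app.js` does the equivalent polling in the browser.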
- `NO FACE` when no face is detected
- `MULTIPLE FACES` when more than one face is detected
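The face-count rule boils down to a small mapping; `face_count_status` is an illustrative helper for this README, not code from the repo:

```python
def face_count_status(num_faces):
    """Map a detected face count to a proctoring status string."""
    if num_faces == 0:
        return "NO FACE"
    if num_faces > 1:
        return "MULTIPLE FACES"
    return "SAFE"
```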
- Uses MediaPipe FaceMesh when available
- Computes head orientation (`HEAD LEFT/RIGHT/UP/DOWN`)
- Uses a CNN (`eye_cnn_final.pth`) for gaze-like classification from the face crop
- Triggers `LOOKING AWAY` when smoothed predictions cross a threshold
If MediaPipe import/init fails:
- System falls back to OpenCV Haar face detection
- Proctoring continues (no crash)
- `NO FACE`, `MULTIPLE FACES`, and CNN-based `LOOKING AWAY` still work
- Head-pose checks are unavailable in fallback mode
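The fallback is essentially a guarded import chain; `pick_face_backend` is an illustrative sketch of the pattern, not the repo's actual function:

```python
def pick_face_backend():
    """Prefer MediaPipe FaceMesh; fall back to OpenCV Haar detection.

    Any import or initialization failure drops to the next option, so
    the proctoring loop never crashes over a missing dependency.
    """
    try:
        import mediapipe as mp
        mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=2)
        return "mediapipe", mesh
    except Exception:
        pass
    try:
        import cv2
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
        )
        return "haar", cascade
    except Exception:
        return "none", None
```

Catching broad `Exception` (not just `ImportError`) matters because MediaPipe can import fine and still fail during initialization.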
`proctor_live.py` loads:
- Model class: `EyeCNN` from `eye_cnn_model.py`
- Weights: `eye_cnn_final.pth` from the same folder
So yes, your local trained model is used at runtime.
```
python train_pretrain.py
```

Outputs `pretrained_eye_cnn.pth`.

```
python train_finetune.py
```

Loads the pretrained weights and writes `eye_cnn_final.pth`.

```
python capture_dataset.py
python collect_images.py
```

- Close Zoom/Teams/Meet/OBS or any app locking the webcam.
- Check available cameras: open `http://127.0.0.1:5000/cameras`
- Set a camera manually: open `http://127.0.0.1:5000/set_camera/1` (replace the index)
- Keep webcam connected before starting server.
- Install dependencies from `requirements.txt`.
- The runtime now automatically falls back to OpenCV if MediaPipe fails.
- Usually VS Code interpreter mismatch.
- Select the same interpreter where you installed requirements.
- `server.py` is the preferred backend entrypoint.
- `app.py` is an older Flask server variant kept for compatibility.
- Buzzer uses `winsound`, so the alert sound is Windows-only.
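A sketch of how the buzzer could degrade gracefully off Windows (`buzz` is an illustrative helper, not the repo's alert code; the `'\a'` terminal bell is a rough cross-platform stand-in):

```python
import sys

def buzz(frequency=1000, duration_ms=300):
    """Play an audible alert.

    winsound exists only on Windows, so other platforms fall back to
    the terminal bell character instead of crashing.
    """
    if sys.platform == "win32":
        import winsound
        winsound.Beep(frequency, duration_ms)
    else:
        print("\a", end="", flush=True)
```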
No license file is currently included in this repository.