FaceLandmarkTracker is a high-performance face landmark detection and tracking library built on OpenCV and DeepCore (Deepixel's proprietary library). It leverages TensorFlow Lite models for real-time face detection, landmark extraction, and head pose estimation. A Python wrapper is included for easy integration.
FaceLandmarkTracker outputs an array of 106 keypoints corresponding to facial landmarks.
Each landmark has a fixed index, which you can use to identify facial regions such as eyes, nose, lips, and jawline.
Below is an illustration showing the landmark indexing scheme:
(Example: numbers correspond to keypoint indices returned by get_keypoints())
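For example, once the tracker has processed a frame you can index directly into the keypoint array to pick out individual landmarks or regions. The sketch below is illustrative only: the license path, image path, and the index values are placeholders, so consult the illustration above for the real region-to-index mapping.

```python
# Sketch of reading landmarks by index; the indices used here are placeholders,
# not the actual region mapping (see the indexing illustration above).
import cv2
import numpy as np
from deeppy import FaceLandmarkTracker

dp_face = FaceLandmarkTracker()
dp_face.init("dp_face_2025.lic")          # path to your license file

frame = cv2.imread("example.jpg")
dp_face.run(frame, 0.2, True)             # still-image mode

keypoints = np.asarray(dp_face.get_keypoints())   # shape (106, 2)

region_points = keypoints[0:8]            # hypothetical index range for one facial region
x, y = keypoints[50]                      # a single landmark (index chosen arbitrarily)
print("Landmark 50:", x, y)
```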
- Real-time face landmark tracking
- Face bounding box extraction
- Head pose estimation
- Visibility confidence per keypoint
- Debug visualization of landmarks
- Easy Python integration
- Works on CPU only — no GPU required
- Supported Python versions: 3.9, 3.10, 3.11, 3.12
Install via pip using the .whl file that matches your Python version:
# Python 3.10
pip install deeppy-2.19.459-cp310-cp310-win_amd64.whl
# Python 3.11
pip install deeppy-2.19.459-cp311-cp311-win_amd64.whl

Make sure the `cpXXX` in the filename matches your Python version.
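If you are not sure which `cpXXX` tag your interpreter needs, you can print it directly:

```python
# Prints the wheel tag of the current interpreter, e.g. "cp310" for Python 3.10
import sys
print(f"cp{sys.version_info.major}{sys.version_info.minor}")
```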
FaceLandmarkTracker is optimized for real-time performance on CPU. Typical inference speeds:
| Environment | Resolution | FPS |
|---|---|---|
| Notebook CPU (Intel i7 11th Gen) | 640x480 | 200 |
| Desktop CPU (Intel i7 11th Gen) | 640x480 | 330 |
- No GPU required — runs efficiently on modern CPUs
- Real-time performance with webcam streams
- Benchmarks may vary depending on CPU model and input resolution
⚠️ Note: Performance may be slightly lower when displaying debug visualization.
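To verify throughput on your own machine, a rough timing loop over `run()` like the sketch below can be used. It times inference only (no capture or display overhead), and the license path, camera index, threshold, and iteration count are arbitrary choices, not values prescribed by the library.

```python
# Rough inference-speed check; results depend on your CPU and input resolution.
# The license path, camera index, threshold, and iteration count are arbitrary.
import time
import cv2
from deeppy import FaceLandmarkTracker

dp_face = FaceLandmarkTracker()
dp_face.init("dp_face_2025.lic")

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cap.release()
if not ret:
    raise RuntimeError("Failed to grab a frame from the webcam.")

n_iters = 300
start = time.perf_counter()
for _ in range(n_iters):
    dp_face.run(frame, 0.2, False)   # video (tracking) mode
elapsed = time.perf_counter() - start

print(f"Approx. inference FPS: {n_iters / elapsed:.1f}")
```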
- Example Python code for capturing and processing a live camera stream.
import cv2
from deeppy import FaceLandmarkTracker

# Path to your license file
license_path = "dp_face_2025.lic"

def run_face_tracker_camera():
    dp_face = FaceLandmarkTracker()
    dp_face.init(license_path)

    cap = cv2.VideoCapture(0)
    if not cap.isOpened():
        print("Failed to open webcam.")
        return

    while True:
        ret, frame = cap.read()
        if not ret:
            break

        # Run detection/tracking on the current frame (video mode, threshold 0.2)
        dp_face.run(frame, 0.2, False)

        print("Keypoints:", dp_face.get_keypoints())
        print("Rect:", dp_face.get_rect())
        print("Pose:", dp_face.get_pose())

        frame = dp_face.display_debug(frame)
        cv2.imshow("FaceLandmarkTracker", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # ESC key
            break

    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    run_face_tracker_camera()

- Example Python code for processing images.
import cv2
from deeppy import FaceLandmarkTracker

# Path to your license file
license_path = "dp_face_2025.lic"

def run_face_tracker_image(image_paths):
    dp_face = FaceLandmarkTracker()
    dp_face.init(license_path)

    for path in image_paths:
        frame = cv2.imread(path)
        if frame is None:
            print("Failed to load image.")
            return

        # Run detection on a static image (still mode, threshold 0.2)
        dp_face.run(frame, 0.2, True)

        print("Keypoints:", dp_face.get_keypoints())
        print("Rect:", dp_face.get_rect())
        print("Pose:", dp_face.get_pose())

        frame = dp_face.display_debug(frame)
        cv2.imshow("FaceLandmarkTracker", frame)
        if cv2.waitKey(0) & 0xFF == 27:  # ESC key
            break

    cv2.destroyAllWindows()

if __name__ == "__main__":
    image_paths = ["example.jpg", "example2.jpg"]
    run_face_tracker_image(image_paths)

license_path = "dp_face_2025.lic"
dp_face = FaceLandmarkTracker()
if dp_face.init(license_path):
    print("Model loaded successfully")
else:
    print("Model initialization failed")

- `license_path` – Path to the required license file (.lic).
- `init(license_path)` – Loads the face tracking models using the given license.
  - Returns `True` if the models are loaded successfully.
  - Raises an exception if the license file path is invalid or the license is not valid.
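Because `init()` raises an exception on an invalid license path or license, startup code may want to catch that explicitly. A minimal sketch, assuming no particular exception type since it is not specified here:

```python
from deeppy import FaceLandmarkTracker

license_path = "dp_face_2025.lic"
dp_face = FaceLandmarkTracker()

try:
    if dp_face.init(license_path):
        print("Model loaded successfully")
    else:
        print("Model initialization failed")
except Exception as exc:  # exact exception type is not documented here
    print(f"Invalid license path or license: {exc}")
```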
dp_face.run(image_src, fThresh, isStill)

- `image_src` – input image (numpy array)
- `fThresh` – confidence threshold; a common value is around 0.2
- `isStill` – whether the input is a static image (`True`) or a video frame (`False`)
keypoints = dp_face.get_keypoints()
rect = dp_face.get_rect()
visibility = dp_face.get_visibility()
pose = dp_face.get_pose()

- `get_keypoints()` – returns a 106 x 2 array of facial landmarks
- `get_rect()` – returns the bounding box `[x, y, width, height]`, where (x, y) is the top-left corner
- `get_visibility()` – returns the visibility confidence per landmark
- `get_pose()` – returns the estimated head pose `[pitch, yaw, roll]`
image_out = dp_face.display_debug(image_src)

- Draws landmarks, bounding boxes, and pose axes on the image
This library is proprietary and requires a paid license. You may not use, distribute, or modify it without a valid license.
- Contact us via email: support@deepixel.xyz
The face image used in this README/demo was generated by This Person Does Not Exist. This image is synthetic and does not depict a real person.