However, when I try it now, I’m getting strange errors.
```
nvbuf_utils: Could not get EGL display connection
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:557 No cameras available
Got EOS from element "pipeline0".
Execution ended after 0:00:00.127232917
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
(Argus) Error EndOfFile: Unexpected error in reading socket (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 266)
(Argus) Error EndOfFile: Receive worker failure, notifying 1 waiting threads (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 340)
(Argus) Error InvalidState: Argus client is exiting with 1 outstanding client threads (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadCore(), line 357)
(Argus) Error EndOfFile: Receiving thread terminated with error (in src/rpc/socket/client/ClientSocketManager.cpp, function recvThreadWrapper(), line 368)
(Argus) Error EndOfFile: Client thread received an error from socket (in src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 145)
(Argus) Error EndOfFile: (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 87)
```
I’ve also tried with `nvgstcapture-1.0 --automate --capture-auto --sensor-id=0` and get similar results.
I have confirmed my device tree is set up properly: when used with JetPack 4.6.1, the same command works as expected. I’m running a recently compiled BalenaOS image (balenaOS 2.88.4+rev17 with 4.9.253-l4t-r32.6).
Does anyone have a confirmed working setup that allows them to capture images from a CSI camera?
I got the Pi Camera 2.1 working on the Jetson Nano with balena about two years ago. My last working code snippets are about 1.2 years old, so they might be a bit outdated. I am currently running the Pi HQ camera (IMX477) on the Jetson NX with a heavily modified BalenaOS, so I cannot provide recent code snippets on the topic.
Here are a few of my assumptions; please check them before copying any code:
I guess that @smithandrewc wants to do everything inside the container and not touch the BalenaOS host. Furthermore, I guess that you, @smithandrewc, are using the Raspberry Pi Camera v2.1 (IMX219) on the Jetson Nano.
Here is a dump of my **very old** Dockerfile:
We are probably running an outdated version of balenalib/jetson-nano-ubuntu:bionic; my last commit was on 22 Feb 2020, so it is very outdated by now!
Overall, I think your error message indicates that the nvargus-daemon is not running. Could you check that the daemon is running?
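A quick way to check this from inside the container is a sketch like the following (assuming `pgrep` is available and `nvargus-daemon` is on the PATH, which depends on your base image):

```shell
# Check whether the Argus camera daemon is running.
# "No cameras available" from nvarguscamerasrc is the typical symptom
# when it is not.
if pgrep -x nvargus-daemon >/dev/null 2>&1; then
    echo "nvargus-daemon is running"
else
    echo "nvargus-daemon is NOT running - start it with: nvargus-daemon &"
fi
```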
Anyhow, using the CSI connector with the IMX219 (whose driver is part of the kernel) was straightforward and should still work. Only the nvargus-daemon and the NVIDIA software side are sometimes a bit difficult, with strange or misleading error messages.
If you are using the PiCamera HQ (IMX477) things are going to get a bit more difficult.
@smithandrewc, what JetPack version are you running? I remember from our transition to the Jetson NX that we could not get nvjpegenc or nvpngenc to run.
Does this provide you with images?
```python
# Do not forget to run `nvargus-daemon &` in the background first.
import cv2


def gstreamer_pipeline(
    capture_width=1280,
    capture_height=720,
    display_width=1280,
    display_height=720,
    framerate=60,
    flip_method=0,
):
    # Build a GStreamer pipeline string: capture NV12 frames from the CSI
    # camera, flip/scale with nvvidconv, and convert to BGR for OpenCV.
    return (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), "
        "width=(int)%d, height=(int)%d, "
        "format=(string)NV12, framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
        "videoconvert ! "
        "video/x-raw, format=(string)BGR ! appsink"
        % (
            capture_width,
            capture_height,
            framerate,
            flip_method,
            display_width,
            display_height,
        )
    )


cap = cv2.VideoCapture(gstreamer_pipeline(flip_method=0), cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("Could not open camera pipeline")

success, image = cap.read()
count = 0
while success:
    cv2.imwrite("frame%d.jpg" % count, image)  # save frame as JPEG file
    success, image = cap.read()
    print("Read a new frame: ", success)
    count += 1

cap.release()
```
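As a hardware-independent sanity check, you can first verify the pipeline string the snippet generates before involving the camera at all (the builder is reimplemented inline here so this runs standalone, without OpenCV or a Jetson):

```python
# Rebuild the same pipeline string as above (inlined so this runs standalone)
# and check that the caps match what nvarguscamerasrc/nvvidconv expect.
def gstreamer_pipeline(capture_width=1280, capture_height=720,
                       display_width=1280, display_height=720,
                       framerate=60, flip_method=0):
    return (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), "
        "width=(int)%d, height=(int)%d, "
        "format=(string)NV12, framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
        "videoconvert ! "
        "video/x-raw, format=(string)BGR ! appsink"
        % (capture_width, capture_height, framerate, flip_method,
           display_width, display_height)
    )


pipeline = gstreamer_pipeline(flip_method=0)
print(pipeline)
assert "nvarguscamerasrc" in pipeline
assert "framerate=(fraction)60/1" in pipeline
assert pipeline.endswith("appsink")
```

If this prints a well-formed pipeline but the camera still fails, the problem is on the nvargus/driver side rather than in your OpenCV code.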