I kept wondering why this was happening. Excerpts from the logs:
$ ls /dev/tty.*
/dev/tty.Bluetooth-Incoming-Port
Only the Bluetooth port shows up. This seems to recover after a reboot or the like.
$ ls /dev/tty.*
/dev/tty.Bluetooth-Incoming-Port
/dev/tty.usbmodem14101
Next is the pattern where pressing the reset button makes the flasher pick up the Bluetooth port instead.
*** Caterina device connected
    Found port: /dev/cu.Bluetooth-Incoming-Port
*** Attempting to flash, please don't remove device
>>> avrdude -p atmega32u4 -c avr109 -U flash:w:xxx.hex:i -P /dev/cu.Bluetooth-Incoming-Port -C avrdude.conf
    avrdude: warning at avrdude.conf:14976: part atmega32u4 overwrites previous definition avrdude.conf:11487.
    Connecting to programmer: .avrdude: butterfly_recv(): programmer is not responding
*** Caterina device disconnected
With this one, after a few retries the USB port does get recognized, so you flash the instant it does. (Some fix, I know 😄)
*** Caterina device connected
    Found port: /dev/cu.usbmodem14101
*** Attempting to flash, please don't remove device
>>> avrdude -p atmega32u4 -c avr109 -U flash:w:/xxx.hex:i -P /dev/cu.usbmodem14101 -C avrdude.conf
    avrdude: warning at avrdude.conf:14976: part atmega32u4 overwrites previous definition avrdude.conf:11487.
    Connecting to programmer: .
    Found programmer: Id = "CATERIN"; type = S
    Software Version = 1.0; No Hardware Version given.
    Programmer supports auto addr increment.
    Programmer supports buffered memory access with buffersize=128 bytes.
    Programmer supports the following devices:
        Device code: 0x44
    avrdude: AVR device initialized and ready to accept instructions
    Reading | ################################################## | 100% 0.00s
    avrdude: Device signature = 0x1e9587
    avrdude: NOTE: "flash" memory has been specified, an erase cycle will be performed
             To disable this feature, specify the -D option.
    avrdude: erasing chip
    avrdude: reading input file "/xxx.hex"
    avrdude: writing flash (15502 bytes):
    Writing | ################################################## | 100% 1.21s
    avrdude: 15502 bytes of flash written
    avrdude: verifying flash memory against /xxx.hex:
    avrdude: load data flash data from input file /xxx.hex:
    avrdude: input file /xxx.hex contains 15502 bytes
    avrdude: reading on-chip flash data:
    Reading | ################################################## | 100% 0.11s
    avrdude: verifying ...
    avrdude: 15502 bytes of flash verified
    avrdude done.  Thank you.
*** Caterina device disconnected
There may well be a proper fix if I dug into it, but my solution was the "keep shooting until something hits" approach.
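If you would rather not race the reset button by hand, the timing trick can be automated: poll for the real USB serial port and call avrdude the moment it enumerates. A minimal Python sketch of that idea, assuming avrdude is on PATH and keeping the placeholder firmware name xxx.hex from the logs above:

import glob
import subprocess
import time

HEX = "xxx.hex"  # placeholder, as in the logs above

# Wait for the Caterina bootloader's USB serial port to enumerate,
# then flash immediately instead of retrying by hand.
while True:
    ports = glob.glob("/dev/cu.usbmodem*")
    if ports:
        subprocess.run([
            "avrdude", "-p", "atmega32u4", "-c", "avr109",
            "-U", "flash:w:%s:i" % HEX, "-P", ports[0],
        ])
        break
    time.sleep(0.1)

Start the script first, then press reset; the Bluetooth port never matches the usbmodem glob, so it cannot be picked up by mistake.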
import tensorflow as tf
import numpy as np
import os, sys
import cv2
from PIL import Image, ImageEnhance, ImageDraw
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])

# Reshape the input into images. The 2nd argument is
# [number of images (-1 = inferred automatically), height, width, channels].
x_image = tf.reshape(x, [-1, 28, 28, 1])
print(x_image)

### Layer 1: convolution
# Filter weights; the arguments are patch height, patch width,
# input channels, output channels.
# A 5x5 filter producing 32 channels (grayscale input, so 1 input channel).
W_conv1 = weight_variable([5, 5, 1, 32])
# Bias for the convolution layer
b_conv1 = bias_variable([32])
# Convolution layer with ReLU activation
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)

### Layer 2: pooling
# 2x2 max pooling
h_pool1 = max_pool_2x2(h_conv1)

### Layer 3: convolution
# patch height, patch width, input channels, output channels:
# a 5x5 filter producing 64 channels
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)

### Layer 4: pooling
h_pool2 = max_pool_2x2(h_conv2)

### Layer 5: fully connected
# The original image is 28x28 and the convolutions use padding='SAME',
# so only the pooling layers change the size. 2x2 pooling with 2x2 stride
# halves each dimension per layer, so 28 / 2 / 2 = 7 is the current size.
# Flatten to rank 1 with height * width * channels elements.
# The output size of 1024 is just a design choice; the rest is the same
# as softmax regression.
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

# Dropout
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

### Layer 6: softmax regression
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

### Training ###
cross_entropy = -tf.reduce_sum(y_ * tf.log(y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()

config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.visible_device_list = "0"

with tf.Session(config=config) as sess:
    sess.run(tf.initialize_all_variables())
    os.makedirs("./model_2", exist_ok=True)
    # Reuse a saved model if one exists; otherwise train, evaluate and save
    ckpt = tf.train.get_checkpoint_state('./model_2')
    if ckpt:
        saver.restore(sess, ckpt.model_checkpoint_path)
    else:
        for i in range(1500):
            batch = mnist.train.next_batch(50)
            if i % 100 == 0:
                train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
                print("step %d, training accuracy %g" % (i, train_accuracy))
            train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
        saver.save(sess, "./model_2/model.ckpt")
        num_test = len(mnist.test.labels)
        sum_accuracy = 0
        for i in range(0, num_test, 50):
            sum_accuracy = sum_accuracy + accuracy.eval(feed_dict={x: mnist.test.images[i:i+50], y_: mnist.test.labels[i:i+50], keep_prob: 1.0})
        print("test accuracy:", sum_accuracy / (num_test / 50))

    GST_STR = 'nvarguscamerasrc \
        ! video/x-raw(memory:NVMM), width=320, height=240, format=(string)NV12, framerate=(fraction)10/1 \
        ! nvvidconv ! video/x-raw, width=(int)320, height=(int)240, format=(string)BGRx \
        ! videoconvert \
        ! appsink'
    WINDOW_NAME = 'Camera Test'
    cap = cv2.VideoCapture(GST_STR, cv2.CAP_GSTREAMER)

    while True:
        ret, img = cap.read()
        if ret != True:
            break
        pilImg = Image.fromarray(np.uint8(img))
        #print(pilImg.format, pilImg.size, pilImg.mode)
        img_width, img_height = pilImg.size
        # Draw a 150x150 target rectangle in the middle of the frame
        draw = ImageDraw.Draw(pilImg)
        draw.rectangle(((img_width - 150) // 2, (img_height - 150) // 2,
                        (img_width + 150) // 2, (img_height + 150) // 2),
                       outline=(255, 255, 255))
        # Crop that same region, grayscale it, and boost it for recognition
        mnistImg = Image.fromarray(np.uint8(img))
        mnistImg = mnistImg.crop(((img_width - 150) // 2, (img_height - 150) // 2,
                                  (img_width + 150) // 2, (img_height + 150) // 2))
        mnistImg = mnistImg.convert("L")
        color = ImageEnhance.Color(mnistImg)
        mnistImg = color.enhance(1.5)
        contrast = ImageEnhance.Contrast(mnistImg)
        mnistImg = contrast.enhance(1.5)
        brightness = ImageEnhance.Brightness(mnistImg)
        mnistImg = brightness.enhance(1.5)
        sharpness = ImageEnhance.Sharpness(mnistImg)
        mnistImg = sharpness.enhance(1.5)
        mnistImg = mnistImg.resize((28, 28), Image.LANCZOS)
        #print(mnistImg)
        # Invert (MNIST digits are light on dark) and scale to [0, 1]
        mnistImg = map(lambda v: 255 - v, mnistImg.getdata())
        mnistImg = np.fromiter(mnistImg, dtype=np.uint8)
        mnistImg = mnistImg.reshape(1, 784)
        mnistImg = mnistImg.astype(np.float32)
        mnistImg = np.multiply(mnistImg, 1.0 / 255.0)
        # print(mnistImg)
        # Dump the 28x28 patch as ASCII art for debugging
        for i in range(len(mnistImg[0])):
            num = mnistImg[0][i]
            if num < 0.4:
                print('    ', end="")
            else:
                print('%03d ' % (num * 1000), end="")
            if i % 28 == 0:
                print('\n')
        # Compare the captured digit against the trained model
        pred = sess.run(y_conv, feed_dict={x: mnistImg, y_: [[0.0] * 10], keep_prob: 1.0})[0]
        # print(pred)
        if not np.max(pred) < 0.5:
            print(np.argmax(pred), np.max(pred))
            draw.text((10, 10), f'{np.argmax(pred)} {round(np.max(pred) * 100, 2)}%')
        imgArray = np.asarray(pilImg)
        cv2.imshow(WINDOW_NAME, imgArray)
        key = cv2.waitKey(10)
        if key == 27:  # ESC
            break

When you run it, a window opens showing the camera image.
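If no window appears at all, the usual suspect on Jetson is an OpenCV build without GStreamer support, in which case cv2.VideoCapture silently fails to open the pipeline. A quick check (my suggestion, not from the original post):

import cv2
# Look for "GStreamer: YES" among the printed build flags
print(cv2.getBuildInformation())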
$ cd ~/
$ mkdir mnist
$ cd mnist
import tensorflow as tf
import numpy as np

# If the MNIST training data is not in the MNIST_data folder yet,
# download it first, then load it; otherwise just load it.
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

# Print one randomly chosen MNIST image as ASCII art
for listitem in mnist.train.next_batch(1)[0]:
    for i in range(len(listitem)):
        num = listitem[i]
        if num == 0:
            print('    ', end="")
        else:
            print('%03d ' % (num * 1000), end="")
        if i % 28 == 0:
            print('\n')
print('\n')

# Placeholder for the training images:
# each 28x28 px training image is stored as a 1x784 row
# https://www.tensorflow.org/api_docs/python/tf/placeholder
x = tf.placeholder(tf.float32, shape=[None, 784])

# Weights: a matrix with one row per pixel and one column per label
# (the digits 0-9), zero-initialized (tf.zeros)
# https://www.tensorflow.org/api_docs/python/tf/Variable
# https://www.tensorflow.org/api_docs/python/tf/zeros
W = tf.Variable(tf.zeros([784, 10]))

# Bias: one entry per label, zero-initialized (tf.zeros)
b = tf.Variable(tf.zeros([10]))

# Softmax regression:
# y is the probability distribution over digits for the input image x.
# matmul multiplies x by W, then b is added; y is a [1, 10] matrix.
# https://www.tensorflow.org/api_docs/python/tf/nn/softmax
# https://www.tensorflow.org/api_docs/python/tf/linalg/matmul
y = tf.nn.softmax(tf.matmul(x, W) + b)

# Placeholder for the correct labels
# https://www.tensorflow.org/api_docs/python/tf/placeholder
y_ = tf.placeholder(tf.float32, shape=[None, 10])

# Cross entropy: set up the cross-entropy loss function
# https://www.tensorflow.org/api_docs/python/tf/math/reduce_sum
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))

# Use gradient descent to minimize the cross entropy.
# This defines the training step; 0.01 is the learning rate.
# https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

# Initialize the Variables defined above
# https://www.tensorflow.org/api_docs/python/tf/initialize_all_variables
init = tf.initialize_all_variables()

# Start a Session. Nothing executes until run() is called
# (init does not run until run(init)).
# https://www.tensorflow.org/api_docs/python/tf/InteractiveSession
sess = tf.InteractiveSession()
sess.run(init)

# Check predictions: compare the predicted digit y with the correct label y_.
# argmax returns the index of the largest value in the array, i.e. the digit
# with the highest probability, and equal returns True when the trained
# prediction matches the answer.
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))

# Accuracy: correct_prediction is boolean, so cast it to float
# (True -> 1, False -> 0) and take the mean
# https://www.tensorflow.org/api_docs/python/tf/math/reduce_mean
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("before training accuracy %g" % accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

# Run 1000 training steps.
# next_batch(50) picks 50 random training pairs (images and their labels);
# feed_dict supplies values for the placeholders.
for i in range(1000):
    batch = mnist.train.next_batch(50)
    if i % 100 == 0:
        train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1]})
        print("step %d, training accuracy %g" % (i, train_accuracy))
    train_step.run(feed_dict={x: batch[0], y_: batch[1]})

# Evaluate the model
print("test accuracy %g" % accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
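One caveat with the loss above, noted as an aside rather than a fix from the original post: -tf.reduce_sum(y_ * tf.log(y)) produces NaN as soon as y contains an exact zero. A minimal sketch of the numerically stable variant, assuming the same TF 1.x graph-mode API as the script above:

# Suggested alternative (not in the original): feed raw logits to
# TensorFlow's fused op instead of computing log(softmax(...)) yourself.
logits = tf.matmul(x, W) + b
y = tf.nn.softmax(logits)  # keep for predictions
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

The rest of the script (accuracy, the training loop) is unchanged.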
$ python3 mnist.py
(28x28 ASCII dump of one sample training digit printed by the script, omitted here)
2019-04-25 13:10:24.417408: W tensorflow/core/platform/profile_utils/cpu_utils.cc:98] Failed to find bogomips in /proc/cpuinfo; cannot determine CPU frequency
2019-04-25 13:10:24.419881: I tensorflow/compiler/xla/service/service.cc:161] XLA service 0x30de2ef0 executing computations on platform Host. Devices:
2019-04-25 13:10:24.419973: I tensorflow/compiler/xla/service/service.cc:168]   StreamExecutor device (0): <undefined>, <undefined>
2019-04-25 13:10:24.495340: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:965] ARM64 does not support NUMA - returning NUMA node zero
2019-04-25 13:10:24.495643: I tensorflow/compiler/xla/service/service.cc:161] XLA service 0x2f6e9ff0 executing computations on platform CUDA. Devices:
2019-04-25 13:10:24.495706: I tensorflow/compiler/xla/service/service.cc:168]   StreamExecutor device (0): NVIDIA Tegra X1, Compute Capability 5.3
2019-04-25 13:10:24.496060: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: NVIDIA Tegra X1 major: 5 minor: 3 memoryClockRate(GHz): 0.9216
pciBusID: 0000:00:00.0
totalMemory: 3.86GiB freeMemory: 742.95MiB
2019-04-25 13:10:24.496126: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-04-25 13:10:25.773266: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-04-25 13:10:25.773363: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990]      0
2019-04-25 13:10:25.773402: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0:   N
2019-04-25 13:10:25.773596: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 273 MB memory) -> physical GPU (device: 0, name: NVIDIA Tegra X1, pci bus id: 0000:00:00.0, compute capability: 5.3)
2019-04-25 13:10:25.863593: I tensorflow/stream_executor/dso_loader.cc:153] successfully opened CUDA library libcublas.so.10.0 locally
before training accuracy 0.098
step 0, training accuracy 0.06
step 100, training accuracy 0.86
step 200, training accuracy 0.8
step 300, training accuracy 0.92
step 400, training accuracy 0.94
step 500, training accuracy 0.9
step 600, training accuracy 0.94
step 700, training accuracy 0.88
step 800, training accuracy 0.88
step 900, training accuracy 0.94
test accuracy 0.9175
$ pip3 install --upgrade cython
$ sudo apt-get -yV install libopenblas-dev
$ sudo apt-get -yV install liblapacke-dev
$ sudo apt-get -yV install gfortran
$ sudo apt-get -yV install llvm-7*
$ echo 'export PATH="/usr/lib/llvm-7/bin:$PATH"' >> ~/.bash_profile
$ source ~/.bash_profile
$ git clone https://github.com/ildoonet/tf-pose-estimation.git
$ cd tf-pose-estimation
$ pip3 install -r requirements.txt
Successfully built dill fire matplotlib numba psutil pycocotools scikit-image msgpack numpy pyzmq tabulate termcolor kiwisolver PyWavelets networkx pillow
Failed to build scipy llvmlite
Installing collected packages: argparse, dill, six, fire, cycler, setuptools, kiwisolver, numpy, pyparsing, python-dateutil, matplotlib, llvmlite, numba, psutil, pycocotools, urllib3, idna, chardet, certifi, requests, PyWavelets, pillow, imageio, decorator, networkx, scipy, scikit-image, slidingwindow, tqdm, msgpack, msgpack-numpy, pyzmq, tabulate, termcolor, tensorpack
$ cd tf_pose/pafprocess
$ sudo apt install swig
$ swig -python -c++ pafprocess.i && python3 setup.py build_ext --inplace
$ pip3 install tensorflow-gpu
Could not find a version that satisfies the requirement tensorflow-gpu (from versions: )
$ sudo apt-get install libhdf5-serial-dev hdf5-tools
$ sudo apt-get install python3-pip
$ sudo apt-get install zlib1g-dev zip libjpeg8-dev libhdf5-dev
$ sudo pip3 install -U numpy grpcio absl-py py-cpuinfo psutil portpicker grpcio six mock requests gast h5py astor termcolor
$ sudo pip3 install --pre --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v42 tensorflow-gpu
$ python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> print(tf.__version__)
1.13.1
>>> exit()
$ cd ~/tf-pose-estimation
$ cd models/graph/cmu
$ bash download.sh
$ pip3 install tqdm
$ pip3 install slidingwindow
$ pip3 install pycocotools
$ python3 run_webcam.py --model=mobilenet_thin --resize=432x368 --camera=1
$ python3 run_webcam.py --model=mobilenet_v2_small --resize=432x368 --camera=1
$ python3 run_webcam.py --model=mobilenet_v2_small --resize=320x176 --camera=1
$ pip3 list --format=columns
Package                       Version
----------------------------- -------------------
absl-py                       0.7.1
apt-clone                     0.2.1
apturl                        0.5.2
asn1crypto                    0.24.0
astor                         0.7.1
beautifulsoup4                4.6.0
blinker                       1.4
Brlapi                        0.6.6
certifi                       2019.3.9
chardet                       3.0.4
cryptography                  2.1.4
cupshelpers                   1.0
cycler                        0.10.0
Cython                        0.29.7
defer                         1.0.6
devscripts                    2.17.12ubuntu1.1
dill                          0.2.9
distro-info                   0.18ubuntu0.18.04.1
feedparser                    5.2.1
fire                          0.1.3
gast                          0.2.2
gpg                           1.10.0
graphsurgeon                  0.3.2
grpcio                        1.20.0
h5py                          2.9.0
html5lib                      0.999999999
httplib2                      0.9.2
idna                          2.8
Keras-Applications            1.0.7
Keras-Preprocessing           1.0.9
keyring                       10.6.0
keyrings.alt                  3.0
kiwisolver                    1.0.1
language-selector             0.1
launchpadlib                  1.10.6
lazr.restfulclient            0.13.5
lazr.uri                      1.0.3
llvmlite                      0.28.0
louis                         3.5.0
lxml                          4.2.1
macaroonbakery                1.1.3
Mako                          1.0.7
Markdown                      3.1
MarkupSafe                    1.0
matplotlib                    3.0.3
mock                          2.0.0
numpy                         1.16.3
oauth                         1.0.1
oauthlib                      2.0.6
olefile                       0.45.1
PAM                           0.4.2
pbr                           5.1.3
Pillow                        5.1.0
pip                           9.0.1
portpicker                    1.3.1
protobuf                      3.7.1
psutil                        5.6.1
py-cpuinfo                    5.0.0
pycairo                       1.16.2
pycocotools                   2.0.0
pycrypto                      2.6.1
pycups                        1.9.73
pygobject                     3.26.1
PyICU                         1.9.8
PyJWT                         1.5.3
pymacaroons                   0.13.0
PyNaCl                        1.1.2
pyparsing                     2.4.0
pyRFC3339                     1.0
python-apt                    1.6.3+ubuntu1
python-dateutil               2.8.0
python-debian                 0.1.32
python-magic                  0.4.16
pytz                          2018.3
pyxdg                         0.25
PyYAML                        3.12
requests                      2.21.0
requests-unixsocket           0.1.5
scipy                         1.2.1
SecretStorage                 2.3.1
setuptools                    41.0.1
simplejson                    3.13.2
six                           1.12.0
slidingwindow                 0.0.13
ssh-import-id                 5.7
system-service                0.3
systemd-python                234
tensorboard                   1.13.1
tensorflow-estimator          1.13.0
tensorflow-gpu                1.13.1+nv19.4
tensorrt                      5.0.6.3
termcolor                     1.1.0
tqdm                          4.31.1
ubuntu-drivers-common         0.0.0
uff                           0.5.5
unattended-upgrades           0.1
unidiff                       0.5.4
unity-scope-calculator        0.1
unity-scope-chromiumbookmarks 0.1
unity-scope-colourlovers      0.1
unity-scope-devhelp           0.1
unity-scope-firefoxbookmarks  0.1
unity-scope-manpages          0.1
unity-scope-openclipart       0.1
unity-scope-texdoc            0.1
unity-scope-tomboy            0.1
unity-scope-virtualbox        0.1
unity-scope-yelp              0.1
unity-scope-zotero            0.1
urllib3                       1.24.2
wadllib                       1.3.2
webencodings                  0.5
Werkzeug                      0.15.2
wheel                         0.30.0
xkit                          0.0.0
zope.interface                4.3.2
$ git clone https://github.com/JetsonHacksNano/gpuGraph.git
$ sudo apt-get install python3-matplotlib
$ cd gpuGraph
$ python3 gpuGraph.py
$ ./darknet detector demo cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights -c 1
$ git clone https://github.com/JetsonHacksNano/CSI-Camera.git
$ cd CSI-Camera
$ python face_detect.py
$ sudo apt-get install qtbase5-dev build-essential gdebi libopencv-dev
$ wget https://github.com/Kitware/CMake/releases/download/v3.13.4/cmake-3.13.4.tar.gz
$ tar xzvf cmake-3.13.4.tar.gz
$ cd cmake-3.13.4
$ ./configure --qt-gui
$ ./bootstrap
$ make -j6
$ sudo make install
$ git clone https://github.com/CMU-Perceptual-Computing-Lab/openpose
$ cd openpose
$ sudo bash ./scripts/ubuntu/install_deps.sh
$ mkdir build
$ cd build
$ cmake ..
$ make -j4
$ sudo make install
$ cd ~/openpose
$ ./build/examples/openpose/openpose.bin --write_json outputJSON/ --display 0 --model_folder ./models --video "./examples/media/video.avi" --write_video outputVideo.avi
OpenPose demo successfully finished. Total time: 1198.628352 seconds.
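--write_json drops one JSON file per frame into outputJSON/. A minimal Python sketch of reading the keypoints back, assuming the schema of recent OpenPose releases (a "people" array whose "pose_keypoints_2d" is a flat list of x, y, confidence triples); the filename below is hypothetical:

import json

# OpenPose names the files <video>_<frame number>_keypoints.json
with open("outputJSON/video_000000000000_keypoints.json") as f:
    frame = json.load(f)

for i, person in enumerate(frame["people"]):
    kp = person["pose_keypoints_2d"]  # flat list: x0, y0, c0, x1, y1, c1, ...
    points = [(kp[j], kp[j + 1], kp[j + 2]) for j in range(0, len(kp), 3)]
    print("person %d: %d keypoints, first = %s" % (i, len(points), points[0]))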
$ ./build/examples/openpose/openpose.bin --model_folder ./models
Starting OpenPose demo...
Configuring OpenPose...
Starting thread(s)...
Auto-detecting camera index... Detected and opened camera 0.
Auto-detecting all available GPUs... Detected 1 GPU(s), using 1 of them starting at GPU 0.
Gtk-Message: 12:50:18.435: Failed to load module "canberra-gtk-module"
$ sudo apt install canberra-gtk*
$ ./build/examples/openpose/openpose.bin --model_folder ./models
$ ./build/examples/openpose/openpose.bin --model_folder ./models --net_resolution 320x176
$ sudo nvpmodel -m 0
$ sudo jetson_clocks
$ git clone https://github.com/JetsonHacksNano/installSwapfile
$ cd installSwapfile
$ ./installSwapfile.sh
$ gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3280, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e
$ cp -a /usr/local/cuda-10.0/samples/ ~/
$ cd ~/samples
$ make
$ cd bin/aarch64/linux/release
$ ./deviceQuery
$ ./oceanFFT
$ ./smokeParticles
$ ./nbody
$ /usr/share/visionworks/sources/install-samples.sh ~/visionworks/
$ cd ~/visionworks/VisionWorks-1.6-Samples/
$ make
$ cd bin/aarch64/linux/release/
$ ./nvx_demo_hough_transform
$ ./nvx_demo_motion_estimation
GPU=1          # changed
CUDNN=1        # changed
CUDNN_HALF=1   # changed
OPENCV=1       # changed
AVX=0
OPENMP=0
LIBSO=0
ZED_CAMERA=0

# set GPU=1 and CUDNN=1 to speedup on GPU
# set CUDNN_HALF=1 to further speedup 3 x times (Mixed-precision on Tensor Cores) GPU: Volta, Xavier, Turing and higher
# set AVX=1 and OPENMP=1 to speedup on CPU (if error occurs then set AVX=0)

DEBUG=0

ARCH= -gencode arch=compute_30,code=sm_30 \
      -gencode arch=compute_35,code=sm_35 \
      -gencode arch=compute_50,code=[sm_50,compute_50] \
      -gencode arch=compute_52,code=[sm_52,compute_52]
#     -gencode arch=compute_61,code=[sm_61,compute_61]   # changed (commented out)

OS := $(shell uname)

# Tesla V100
# ARCH= -gencode arch=compute_70,code=[sm_70,compute_70]

# GeForce RTX 2080 Ti, RTX 2080, RTX 2070, Quadro RTX 8000, Quadro RTX 6000, Quadro RTX 5000, Tesla T4, XNOR Tensor Cores
# ARCH= -gencode arch=compute_75,code=[sm_75,compute_75]

# Jetson XAVIER
# ARCH= -gencode arch=compute_72,code=[sm_72,compute_72]

# GTX 1080, GTX 1070, GTX 1060, GTX 1050, GTX 1030, Titan Xp, Tesla P40, Tesla P4
# ARCH= -gencode arch=compute_61,code=sm_61 -gencode arch=compute_61,code=compute_61

# GP100/Tesla P100 - DGX-1
# ARCH= -gencode arch=compute_60,code=sm_60

# For Jetson TX1, Tegra X1, DRIVE CX, DRIVE PX - uncomment:
ARCH= -gencode arch=compute_53,code=[sm_53,compute_53]   # changed (uncommented for the Nano)

# For Jetson Tx2 or Drive-PX2 uncomment:
# ARCH= -gencode arch=compute_62,code=[sm_62,compute_62]
:
$ PATH=/usr/local/cuda/bin:$PATH make
$ ./darknet
usage: ./darknet <function>
$ wget http://pjreddie.com/media/files/vgg-conv.weights
$ ./darknet nightmare cfg/vgg-conv.cfg vgg-conv.weights data/scream.jpg 10
$ wget https://pjreddie.com/media/files/yolov3.weights
$ ./darknet detect cfg/yolov3.cfg yolov3.weights data/dog.jpg
$ ./darknet detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights
$ sudo apt-get install libv4l-dev v4l-utils
$ v4l2-ctl --list-devices
vi-output, imx219 6-0010 (platform:54080000.vi:0):
        /dev/video0

$ v4l2-ctl -d /dev/video0 --all
Driver Info (not using libv4l2):
        Driver name   : tegra-video
        Card type     : vi-output, imx219 6-0010
        Bus info      : platform:54080000.vi:0
        Driver version: 4.9.140
        Capabilities  : 0x84200001
                Video Capture
                Streaming
                Extended Pix Format
                Device Capabilities
        Device Caps   : 0x04200001
                Video Capture
                Streaming
                Extended Pix Format
Priority: 2
Video input : 0 (Camera 0: no power)
Format Video Capture:
        Width/Height      : 640/480
        Pixel Format      : 'RG10'
        Field             : None
        Bytes per Line    : 1280
        Size Image        : 614400
        Colorspace        : sRGB
        Transfer Function : Default (maps to sRGB)
        YCbCr/HSV Encoding: Default (maps to ITU-R 601)
        Quantization      : Default (maps to Full Range)
        Flags             :

Camera Controls
                     group_hold 0x009a2003 (bool)   : default=0 value=0 flags=execute-on-write
                    sensor_mode 0x009a2008 (int64)  : min=0 max=0 step=0 default=0 value=0 flags=slider
                           gain 0x009a2009 (int64)  : min=0 max=0 step=0 default=0 value=16 flags=slider
                       exposure 0x009a200a (int64)  : min=0 max=0 step=0 default=0 value=13 flags=slider
                     frame_rate 0x009a200b (int64)  : min=0 max=0 step=0 default=0 value=2000000 flags=slider
                    bypass_mode 0x009a2064 (intmenu): min=0 max=1 default=0 value=0
                override_enable 0x009a2065 (intmenu): min=0 max=1 default=0 value=0
                   height_align 0x009a2066 (int)    : min=1 max=16 step=1 default=1 value=1
                     size_align 0x009a2067 (intmenu): min=0 max=2 default=0 value=0
               write_isp_format 0x009a2068 (bool)   : default=0 value=0
       sensor_signal_properties 0x009a2069 (u32)    : min=0 max=0 step=0 default=0 flags=read-only, has-payload
        sensor_image_properties 0x009a206a (u32)    : min=0 max=0 step=0 default=0 flags=read-only, has-payload
      sensor_control_properties 0x009a206b (u32)    : min=0 max=0 step=0 default=0 flags=read-only, has-payload
              sensor_dv_timings 0x009a206c (u32)    : min=0 max=0 step=0 default=0 flags=read-only, has-payload
               low_latency_mode 0x009a206d (bool)   : default=0 value=0
                   sensor_modes 0x009a2082 (int)    : min=0 max=30 step=1 default=30 value=5 flags=read-only
$ ./darknet detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights
$ wget https://pjreddie.com/media/files/yolov3-tiny.weights
$ ./darknet detector demo cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights