I've been trying to get into mining for a while now, but I keep running into never-ending problems with my hardware. I've been following these instructions for the mining part:
https://www.meebey.net/posts/ethereum_gpu_mining_on_linux_howto/
https://medium.com/@oconnorct1/getting-started-mining-ethereum-on-ubuntu-january-2017-148c53f8793b
And these instructions for the NVIDIA CUDA toolkit part:
How can I install CUDA on Ubuntu 16.04?
http://docs.nvidia.com/cuda/cuda-installation-guide-linux/#axzz4WNL7OgLr
After many attempts, I've managed to install the CUDA drivers successfully, with the following output from the deviceQuery sample:
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 970"
CUDA Driver Version / Runtime Version 9.1 / 9.1
CUDA Capability Major/Minor version number: 5.2
Total amount of global memory: 4037 MBytes (4233101312 bytes)
(13) Multiprocessors, (128) CUDA Cores/MP: 1664 CUDA Cores
GPU Max Clock rate: 1329 MHz (1.33 GHz)
Memory Clock rate: 3505 Mhz
Memory Bus Width: 256-bit
L2 Cache Size: 1835008 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Supports Cooperative Kernel Launch: No
Supports MultiDevice Co-op Kernel Launch: No
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.1, CUDA Runtime Version = 9.1, NumDevs = 1
Result = PASS
Since the last line says Result = PASS, I assumed the installation was a success. This is the output of nvidia-smi -a:
==============NVSMI LOG==============
Timestamp : Thu Jan 4 15:07:41 2018
Driver Version : 387.26
Attached GPUs : 1
GPU 0000:01:00.0
Product Name : GeForce GTX 970
Display Mode : Enabled
Persistence Mode : Disabled
Driver Model
Current : N/A
Pending : N/A
Serial Number : N/A
GPU UUID : GPU-a3b994a4-62dc-fa4d-1a3e-1cfbb5d46276
VBIOS Version : 84.04.2F.00.81
Inforom Version
Image Version : N/A
OEM Object : N/A
ECC Object : N/A
Power Management Object : N/A
GPU Operation Mode
Current : N/A
Pending : N/A
PCI
Bus : 0x01
Device : 0x00
Domain : 0x0000
Device Id : 0x13C210DE
Bus Id : 0000:01:00.0
Sub System Id : 0x366A1458
GPU Link Info
PCIe Generation
Max : 3
Current : 3
Link Width
Max : 16x
Current : 16x
Fan Speed : 34 %
Performance State : P0
Clocks Throttle Reasons
Idle : Not Active
User Defined Clocks : Not Active
SW Power Cap : Not Active
HW Slowdown : Not Active
Unknown : N/A
Memory Usage
Total : 4037 MB
Used : 808 MB
Free : 3229 MB
Compute Mode : Default
Utilization
Gpu : 1 %
Memory : 1 %
Ecc Mode
Current : N/A
Pending : N/A
ECC Errors
Volatile
Single Bit
Device Memory : N/A
Register File : N/A
L1 Cache : N/A
L2 Cache : N/A
Texture Memory : N/A
Total : N/A
Double Bit
Device Memory : N/A
Register File : N/A
L1 Cache : N/A
L2 Cache : N/A
Texture Memory : N/A
Total : N/A
Aggregate
Single Bit
Device Memory : N/A
Register File : N/A
L1 Cache : N/A
L2 Cache : N/A
Texture Memory : N/A
Total : N/A
Double Bit
Device Memory : N/A
Register File : N/A
L1 Cache : N/A
L2 Cache : N/A
Texture Memory : N/A
Total : N/A
Temperature
Gpu : 34 C
Power Readings
Power Management : Supported
Power Draw : 58.17 W
Power Limit : 250.00 W
Default Power Limit : 250.00 W
Min Power Limit : 100.00 W
Max Power Limit : 280.00 W
Clocks
Graphics : 1177 MHz
SM : 1177 MHz
Memory : 3505 MHz
Applications Clocks
Graphics : 1177 MHz
Memory : 3505 MHz
Max Clocks
Graphics : 1519 MHz
SM : 1519 MHz
Memory : 3505 MHz
Compute Processes : None
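Just to double-check the CUDA runtime outside of the bundled samples, I also put together a tiny standalone check (my own sketch, nothing from the guides above; the file name is arbitrary and the paths assume the default toolkit location). Built with gcc check_cuda.c -I/usr/local/cuda/include -L/usr/local/cuda/lib64 -lcudart -o check_cuda, it reports the same GTX 970:

/* Minimal CUDA runtime sanity check (my own sketch, not from the guides). */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("CUDA devices found: %d\n", count);

    for (int i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, %zu MB global memory, compute capability %d.%d\n",
               i, prop.name, prop.totalGlobalMem / (1024 * 1024),
               prop.major, prop.minor);
    }
    return 0;
}

So as far as I can tell, the driver and runtime side looks fine.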
Nonetheless, when I run ethminer -M -G, I get the following output:
X server found. dri2 connection failed!
DRM_IOCTL_I915_GEM_APERTURE failed: Invalid argument
Assuming 131072kB available aperture size.
May lead to reduced performance or incorrect rendering.
get chip id failed: -1 [22]
param: 4, val: 0
X server found. dri2 connection failed!
DRM_IOCTL_I915_GEM_APERTURE failed: Invalid argument
Assuming 131072kB available aperture size.
May lead to reduced performance or incorrect rendering.
get chip id failed: -1 [22]
param: 4, val: 0
beignet-opencl-icd: no supported GPU found, this is probably the wrong opencl-icd package for this hardware
(If you have multiple ICDs installed and OpenCL works, you can ignore this message)
No GPU device with sufficient memory was found. Can't GPU mine. Remove the -G argument
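As far as I understand, the -G switch makes ethminer mine through OpenCL, and the message above comes from beignet, Intel's OpenCL ICD, which obviously doesn't know about the GTX 970. To see which OpenCL platforms are actually registered on this machine, I wrote the small check below (my own sketch, assuming the OpenCL headers and ICD loader are installed, e.g. from the ocl-icd-opencl-dev package; build with gcc list_cl.c -lOpenCL -o list_cl):

/* List the OpenCL platforms and GPU devices visible to the ICD loader
 * (my own sketch). If only the beignet platform shows up here,
 * ethminer -G has no NVIDIA OpenCL platform to mine on. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_uint nplat = 0;
    if (clGetPlatformIDs(0, NULL, &nplat) != CL_SUCCESS || nplat == 0) {
        printf("No OpenCL platforms found at all.\n");
        return 1;
    }

    cl_platform_id plats[16];
    clGetPlatformIDs(nplat > 16 ? 16 : nplat, plats, NULL);

    for (cl_uint p = 0; p < nplat && p < 16; ++p) {
        char name[256] = {0};
        clGetPlatformInfo(plats[p], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        printf("Platform %u: %s\n", p, name);

        cl_uint ndev = 0;
        if (clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, 0, NULL, &ndev) != CL_SUCCESS) {
            printf("  (no GPU devices)\n");
            continue;
        }

        cl_device_id devs[8];
        clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, ndev > 8 ? 8 : ndev, devs, NULL);
        for (cl_uint d = 0; d < ndev && d < 8; ++d) {
            char dname[256] = {0};
            cl_ulong mem = 0;
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_GLOBAL_MEM_SIZE, sizeof(mem), &mem, NULL);
            printf("  GPU %u: %s, %llu MB\n", d, dname,
                   (unsigned long long)(mem / (1024 * 1024)));
        }
    }
    return 0;
}

If only the beignet platform shows up there, I'm guessing I either need the NVIDIA OpenCL ICD registered (a versioned nvidia-opencl-icd package matching the 387 driver, with its file under /etc/OpenCL/vendors/), or an ethminer build with CUDA support so I can skip the OpenCL path entirely, but I'm not sure which.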
There seem to be quite a few people running into similar problems, but I haven't been able to find a comprehensive solution anywhere. So far I've found this person:
https://github.com/longcw/yolo2-pytorch/issues/15
who was directed to a solution that I can't fully comprehend:
https://github.com/longcw/yolo2-pytorch/issues/10
I'm not usually one to ask, but this has been bugging me for weeks now. I'd appreciate any help! Thanks in advance and have a good one!