Looking at the OpenArena game, the energy used in the test (power 33W x time 1.3 hours) was about 43 watt-hours or roughly 1.5e5 joules, whereas the reported charge and voltage, multiplied, are 2.47 amp-hours x 14.4 volts = 35.6 watt-hours. Given various conversion losses, roundoff error and so on, I'm calling these two numbers consistent. Therefore it's valid to estimate the battery life in hours as the charge (reported in amp-hours) times the voltage, divided by the measured power in watts.
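As a sanity check, here is a minimal sketch of that estimate read straight from sysfs, assuming the battery shows up as /sys/class/power_supply/BAT0 and reports charge_full and voltage_min_design (some batteries report energy_* in watt-hours instead):
cd /sys/class/power_supply/BAT0
# charge_full is in microamp-hours, voltage_min_design in microvolts; 33 is the measured wattage.
# Prints estimated hours, e.g. 2.47 Ah x 14.4 V / 33 W = about 1.1 hours.
echo "scale=2; $(cat charge_full) * $(cat voltage_min_design) / 1000000000000 / 33" | bc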
Clearly Windows knows how to do something with power management that Linux doesn't. Interventions I tried to reduce power:
Taking the 3 non-boot CPUs offline had no effect at all on power. (Actually there are 2 cores, each with hyperthreading, so 4 logical CPUs.) CPUs that are not doing anything should be halted and should draw very little power anyway.
In /sys/devices/system/cpu/cpufreq/policy0/ (replicated for CPUs 1-3 as policy1-3): scaling_min_freq = 500000 (0.5GHz) and scaling_max_freq = 2700000 (2.7GHz). I changed scaling_max_freq to 600000, and the governor (powersave) changed scaling_cur_freq to various values close to 800000 (0.8GHz). This had no visible effect on the power used. The run frequency has no effect when the CPU is not running; also, if the same job is done fast or slow, and the CPU supply voltage is unchanged, pretty close to the same amount of energy is used. Frequency scaling only changes the power drain if the voltage is reduced (which it normally is, but not radically), or if some work is not completed due to slowness, which is not realistic if the machine is otherwise idle.
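For reference, the frequency cap looked something like this, assuming the usual cpufreq sysfs layout (run as root):
for p in /sys/devices/system/cpu/cpufreq/policy*; do
    echo 600000 > $p/scaling_max_freq     # cap at 0.6 GHz; the governor picks the nearest supported step
done
grep . /sys/devices/system/cpu/cpufreq/policy*/scaling_cur_freq     # see what it actually settles on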
Lithium ion (cobalt oxide) battery parameters, per Wikipedia; all these values are volts per cell.
Memtest86+ reported parameters with one or two memory sticks. Memory speeds are in bytes per second.
Item | Size | 1x 8Gb | 2x 8Gb | Comments |
---|---|---|---|---|
Clock | | 2.195 | 2.195 | GHz |
L1 cache | 32Kb | 1.46e11 | 1.46e11 | |
L2 cache | 256Kb | 3.33e10 | 3.33e10 | |
L3 cache | 3072Kb | 2.31e10 | 2.48e10 | |
Main memory | 8-16Gb | 1.13e10 | 1.43e10 | 1.27x |
Run time | | 105 | 180 | Mins/pass |
On the NUC (Iris), adding a second stick of memory raised the memory bandwidth by almost exactly 1.5x. On the Acer, however, the score on the Phoronix memory test sped up 1.7x. It's hard to predict which tasks raw memory bandwidth will benefit, and by how much.
Oops, in SMP mode after about 5 minutes it seems to have gotten stuck, no error message. I restarted the test in the default mode, single core. I'm guessing that memtest86+ imperfectly supports the Broadwell-U chipset, which is fairly new.
I tested suspending to S3 again (by lid close), and the machine would not go down (power stayed at 13W, which is about right for running with the display off). When I opened the lid again I could not wake it with the power button. I had to hold down the power button until it powered off. This occurred twice in succession. The machine was on line power at the time.
When I rebooted, both times, the touchpad was inoperative; the X-server did not recognize it as existing; actually most likely the relevant device driver did not get loaded. I let it incubate about 18 hours with only battery power, and the touchpad is now back. This issue was not seen again. This is a different issue from kernel 4.1.12 not supporting the touchpad.
First hypothesis: If you're on line power and you suspend to S3, the machine gets hosed. I am pretty sure that every suspend on line power has failed. Not sure about hibernate (S4); I think I never tried it on line power. [Another source reports similar failure for S4 on line power.]
How about on battery power? I have suspended many times successfully. This time, I suspended on battery and left the machine overnight, 11 hours. It woke successfully (and I got a power measurement by looking at the battery charge). Then, still on battery, I suspended 15 times in succession by closing and later opening the lid. On the 15th attempt it turned on the display backlight, showing the scene saved in video RAM, but user space did not wake. Various maneuvers had no effect and I eventually forcibly rebooted. The next attempt to suspend failed the same way.
Then I tried a similar test to S4, still on battery power. 30 repetitions were all successful. (Which is not a guarantee that the 31st try would also succeed.) It takes 7 secs to go down, and 21 secs to resume; when you use the hibernate button on the logout confirmation box, it starts the screen locker and the 21 secs is to when the locker is ready to read your password. Save and restore times are longer if more RAM is in use, e.g. when testing a game or a movie.
Back to S3: succeeded once, failed on the second try.
Second hypothesis: the behavior on battery power is different, but both issues are probably a BIOS bug. It may be fixed eventually but not soon, because I already installed the latest BIOS.
So what am I going to do; what is the action plan? The goal is to have a great laptop with all its parts working. Issues:
Just because I make a hypothesis doesn't make it true. The problem could be caused at any of these levels:
Let's be realistic: I'm probably not going to be able to get adequate information to evaluate what the problem really is, nor to find out how prevalent it is. It's going to be hard to convince Acer customer support that there really is a problem and that they need to dig into their BIOS (which is outsourced). They certainly will want to see it confirmed in Windows.
I see three paths forward, and I need to concentrate on the right one.
Actions: I'm still in the phase of diagnosing the problem.
Google search: do other people report freezes?
Reinstall Windows (yuck) and test S3.
Then I can make a more intelligent choice of strategy.
Results of Google search for other users having my symptom. Search terms: acer aspire E5-573g suspend freeze. There are few hits, and more false positives for the V5-573G which need to be ignored.
Arch Linux wiki page for the Acer Aspire E5-573. The author reports that S3 and S4 never work when you're running on AC power. He discovered that the non-boot CPUs were not being taken offline. To take them down in your suspend script:
for c in /sys/devices/system/cpu/cpu*/online; do echo 0 >$c; done
echo 'mem' > /sys/power/state
for c in /sys/devices/system/cpu/cpu*/online; do echo 1 >$c; done
This file is absent for the boot CPU (duh). He doesn't mention intermittent freezes on battery power.
The author warns that BIOS 1.25 is hosed; it refuses to boot Linux. At the time of posting it was necessary to downgrade to 1.15; presently 1.31 and 1.35 have the bug fixed.
He also recommends some kernel parameters (which are off topic for this paragraph), and suggests providing a kernel firmware patch. He tried and failed to use ath9k for Bluetooth (jimc says: needs ath10k_pci).
From the man page for systemd-suspend.service, which is what is eventually invoked by all programs wanting to suspend the machine (also hibernate or hybrid-sleep): put hack scripts in /usr/lib/systemd/system-sleep/ and they will be executed (in parallel) with two arguments: pre or post, and suspend or hibernate. See also /etc/systemd/sleep.conf and the systemd-sleep.conf man page; neither is useful for this problem. OK, I'm going to take the CPUs offline with a hack script. It worked -- on AC power you can now suspend to RAM or hibernate, and survive.
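For the record, here is a minimal sketch of such a hack script; the file name is my own invention, and the arguments are as documented in the man page:
#!/bin/sh
# /usr/lib/systemd/system-sleep/offline-cpus.sh (hypothetical name); make it executable.
# systemd calls it with $1 = pre|post and $2 = suspend|hibernate|hybrid-sleep.
case "$1" in
    pre)  for c in /sys/devices/system/cpu/cpu*/online; do echo 0 > $c; done ;;
    post) for c in /sys/devices/system/cpu/cpu*/online; do echo 1 > $c; done ;;
esac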
However, the other bug is still with us. Here is the outcome of repeatedly suspending under various conditions. In all but one test, the procedure was to shut it down, wait for it to go down, leave it down 15 secs, wake it, wait for it to be fully awake plus 15 secs more, and repeat from the beginning.
S3, battery power, close and open lid. It succeeded 14 times, then on the 15th it froze. The exact behavior: when you open the lid it turns on the display backlight, showing the scene saved in video RAM, but user space is unresponsive: it does not try to restart Wi-fi, does not show the corresponding icons, does not update the clock or my strip chart, and the mouse cursor does not move. Various maneuvers did not wake it. The light bulb lamp (under the left front corner), which should have been blue, instead blinked red about once per second (in normal S3 it blinks every 4 seconds). All the freezes had similar symptoms.
S4, battery power, using the hibernate button in the logout dialog, and the power button to wake it. In Linux it boots normally for about 10 secs until it discovers the hibernation image, whereupon it diverts to restoring that image. It suspended successfully 30 times. (But what would happen on the 31st?) I watched the clock on another machine, and it took 7 secs to go down, and 21 secs to boot up, counting to when the screen locker was able to accept a password. These times get longer if you're using more RAM, e.g. if playing a game or a movie.
S3 again, as above. It suspended once, and froze on the second try.
S3, battery power, lid close and open, with the hack script that positively shuts down the non-boot CPUs and boots them up on resuming. This time it suspended and woke successfully 8 times, then froze on the ninth.
S3, line power, lid close and open, with the hack script. It suspended successfully 4 times and froze on the fifth.
S3, line power, lid close, running Windows. It suspended 30 times with no freezes. It took 7 secs to sleep and under 1 sec to wake. I did not test if S4 was reliable in Windows.
S3, half the repetitions on line power and half on battery, lid close and open, back on Linux. This test was done about a month later. In this test I left the machine down or up for 5 minutes in each step. It woke 30 times in succession with no errors.
Conclusion: there are two bugs, and S3 fails intermittently if the sleep or wake time is too short: about a 10% probability of failure with a 15 sec sleep time. But so far S4 has been completely reliable, and Windows does not have this problem.
Until I re-tested S3 with the longer sleep time, I used hybrid sleep: it takes the time to write memory to disc, but then suspends to RAM. If it wakes, fine. If it doesn't, you kill it and reboot, which reloads the image on disc, same as for normal hibernation.
Web resources:
I'm using version 6.0 Hammerfest, licensed under GPLv3. It is actually available on the SuSE Build Service in the benchmark sub-repo, called phoronix-test-suite. General dependencies: PHP CLI (SuSE includes it in the main php5 package) and standard development tools like gcc. The SuSE package also requires xdg-utils, php5, and these php5 sub-packages: curl dom gd json openssl pcntl posix zip.
It's going to be a challenge to learn to use this package.
A lot of reviewers use the Phoronix Test Suite on the machines they review, and I had hoped to get tests on my own machines that can be compared with the published results. But unfortunately there are tons of tests in the suite, and everyone uses different tests. Nonetheless, running these benchmarks is valuable because I can pick a wide range of tests, particularly tests that it wouldn't be reasonable for me to develop on my own.
You also need a test configuration, and a standard one is available as phoronix-test-suite_data . If you install it, it will drag in phoronix-test-suite. It also requires glibc-devel-static . The package with all dependencies occupies 177Mb. However, this package has way too many tests and takes way too long for what I'm trying to do.
For historical reference, to make it run as a user (yourself), you need to create a symbolic link from ~/.phoronix-test-suite/test-suites/local/suse-basic-test-suite to /var/lib/phoronix-test-suite/test-suites/local/suse-basic-test-suite (directory). Then in the various commands you can refer to the SuSE test suite just as suse-basic-test-suite.
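Concretely, something like this (the local directory may need to be created first):
mkdir -p ~/.phoronix-test-suite/test-suites/local
ln -s /var/lib/phoronix-test-suite/test-suites/local/suse-basic-test-suite \
      ~/.phoronix-test-suite/test-suites/local/suse-basic-test-suite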
To execute the preset tests, just do /usr/bin/pts_launch_batch (no command line arguments). For interactive testing you will find in your desktop menu, System category, an entry for Phoronix Test Suite, which will launch /usr/bin/phoronix-test-suite in an xterm. It is able to install SuSE packages required for testing (it asks for the root password). It retrieves from the mother ship the source code for the tests you request.
I think I'm going to try to make this suite a little smaller. I made my own test directory called couchnet-bmk under ~/.phoronix-test-suite/test-suites/local, copied suite-definition.xml from the SuSE tests, and shortened it drastically. I also tried out the various tests on one machine, trying to find ones that would compile and execute in my context, and also to avoid tests that run forever. Here are the tests finally chosen:
I wanted these tests but they failed to install:
There are about 45 graphic tests available; I installed and tried quite a number of them. About half are actual game demos similar to OpenArena; I didn't try these since they don't give additional valuable information. Among 7 non-game tests, 4 could be installed, but only one subtest of one benchmark ran and gave a valid result. This was disappointing.
I tried two light entertainment games; only one installed and ran: Super Tux Kart. See the results section for the outcome.
The fastest machine finished the tests (including the game) in 16 minutes. The slowest took 70 minutes, including waiting for the game to make progress and finally killing it.
Here's the suggested procedure to run the test suite:
Pick your master host to set it up on. First (as yourself) execute:
phoronix-test-suite batch-setup
My choices: yes save, no web browser, no auto upload, do not prompt for anything, run all test options. Beware: the hardware characteristics of the machine are cached somewhere, and if you copy ~/.phoronix-test-suite to another machine, the cached values will be wrongly reported. Don't believe them.
Then execute (Bourne shell syntax shown):
phoronix-test-suite batch-install $TEST 2>&1 | tee /tmp/pts.log
$TEST could be a suite definition such as couchnet-bmk, or the name of an individual test inside ~/.phoronix-test-suite/installed-tests . For example if you give pts/stream it will match ~/.phoronix-test-suite/installed-tests/pts/stream-1.2.0/stream . The OpenBenchmarking website gives names like pts/stream .
In this step the program downloads files, e.g. source code and dungeon art, and compiles the test. If dependencies like devel packages are missing it will install them, for which it will ask you for the root password. On a SuSE system it uses zypper; on Debian or Ubuntu it uses apt-get; on Red Hat style systems it uses yum.
You are allowed to batch-install the entire test suite, but I had to try each test one by one, so I didn't use that feature.
Then execute (Bourne shell syntax shown):
phoronix-test-suite batch-run $TEST 2>&1 | tee /tmp/pts.log
Generally you would specify your custom suite for $TEST.
When you run a suite including a game (OpenArena for me), it's recommended to connect to the test machine by SSH, so you can kill the game if it runs too long, and for that you need to steal the console display. Do this as root:
ps agx | grep bin/X # Look for the file specified by the -auth option.
xauth -f /run/lightdm/root/:0 list
Select the line that is printed, something like:
aurora/unix:0 MIT-MAGIC-COOKIE-1 1234567890abcdef1234567890abcdef
Then in your non-root session for running the tests, paste the printed line as the arguments to xauth add:
xauth add aurora/unix:0 MIT-MAGIC-COOKIE-1 1234567890abcdef1234567890abcdef
export DISPLAY=:0 # Bourne shell syntax
phoronix-test-suite batch-run $TEST 2>&1 | tee /tmp/pts.log
If you're using C-shell change to:
setenv DISPLAY :0 # C-shell syntax
phoronix-test-suite batch-run $TEST |& tee /tmp/pts.log
If you need to kill just the game, use ps to find it, and kill the bottom-most process mentioning openarena. You will need to do this repeatedly because there are 3 trials and on some machines it wants to test up to 3 resolutions. If you want to kill the entire test run, press ctrl-C in the window running the test, or find the process titled Phoronix Test Suite and kill -INT $PID; then use ps agxf to find stragglers like the still-running game demo.
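For example (the PIDs are stand-ins):
ps agxf | grep -i openarena    # the most indented entry is the game binary itself
kill 12345                     # 12345 stands for that game's PID
ps agxf | grep -i phoronix     # to stop the whole run instead
kill -INT 12300                # 12300 stands for the Phoronix Test Suite PID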
Running this on a lot of hosts: first get all the tests installed and working on one machine, then repeat the install and run steps on each of the other hosts; a sketch of one way to do that follows.
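I did it by hand, but something like this loop would work, assuming SSH access; the host names are examples, and batch-setup still has to be answered once on each host:
for h in piki jacinth aurora; do
    rsync -a ~/.phoronix-test-suite/test-suites/local/ $h:.phoronix-test-suite/test-suites/local/
    ssh -t $h 'phoronix-test-suite batch-install couchnet-bmk && phoronix-test-suite batch-run couchnet-bmk'
done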
Here are the test results. The hosts are sorted kind of by product families. Mb/s means million bytes per second, decimal. For tests reporting in seconds, lower is better; if per second, higher is better. When the SSD and the second memory stick were installed, the Acer was retested, and where there was a difference a second number is shown.
Item | kermit | piki | jacinth | aurora | diamond | iris | xena-vaio | xena-acer |
---|---|---|---|---|---|---|---|---|
CPU | AMD E-350 | AMD Neo 6850e | AMD G-T40E | AMD G-T56N | Intel i7 3517UE | Intel i5 5250U | Intel i7 3632QM | Intel i5 5200U |
GHz | 1.6 | 1.8 | 1.0 | 1.65 | 1.7 | 1.6 | 2.2 | 2.2 |
Graphics | Radeon HD 6310 | Radeon HD 3200 | Radeon HD 6250 | Radeon HD 6320 | Intel HD 4000 | Intel HD 6000 | GeForce GT 640M | GeForce 940M |
AIO-Stress random write (Mb/s) | 14.7 | 26.2 | 17.0 | 30.5 | 65.2 | 2235 (!) | 133.0 | 283.9- 2196.24 |
OpenArena game (frame/s) | killed | 8.8 | killed | killed | 5.4 (kill) | 5.5 (kill) | 51 | 34-55 |
Stream memory perf (Mb/s) | 2558 | 3953 | 2447 | 3143 | 14360 | 13262 | 11900 | 8238- 14021 |
Loopback TCP net perf (sec) | 123 | 68 | 120 | 144 | 20.3 | 19.3 | 18.4 | 19.0 |
Himeno Poisson eqn (Mflop/s) | 206 | 233 | 167 | 187 | 1153 | 1374 | 1354 | 1323 |
FLAC encoding (sec) | 70 | 40 | 90.5 | 73 | 10.8 | 10.1 | 9.2 | 10.1 |
OpenSSL RSA 4096bit (sign/s) | 40 | 102 | 36 | 33 | 160 | 204 | 204 | 229 |
Total run time (minutes) | 71 | 41 | 29 | 70 | 44 | 29 | 16 | 16 |
Jimc's SHA-512 and I/O (Mb/s) | 23.0 | 41.9 | 26.2 | 26.0 | 72.6 | 81.5 | 115.4 | 81.3 |
Power at idle (watts) | 14 | 29 | 13 | 19 | 14 | 9 | 30 | 15 |
These patterns can be seen in the results:
The machines clearly fall into two groups by CPU performance. The machines with AMD CPUs are older and have substantially less CPU power than those with Intel CPUs. Within each group, particular machines are not dramatically better or worse (except for gaming). This result should not reflect on the manufacturers, but rather on the age of the CPUs.
The Vaio has a decent gaming GPU (it was bought to be a gaming machine). The Acer is also using a GPU that's credible for gaming (the Intel HD 5500), but it would be a lot more powerful if it could use the nVidia GeForce 940M (e.g. in Windows). The others range from sluggish to pathetic. All of the machines are adequate for software development and document preparation, my main use cases.
The Intel chipsets have much higher memory bandwidth, by roughly a factor of 4. Similarly, the Intel chipsets' floating point throughput is about a factor of 6 better. Their integer performance (OpenSSL) is also better, but not as dramatically.
Comparing two versus one memory stick, the Stream test is 1.7x faster. Most tasks are not speeded up by the second memory stick, but I was surprised to find that OpenArena sped up almost exactly 1.5x. A possible reason is that the Intel GPU puts video RAM in main memory, causing a heavy load when image segments are copied from one place to another. But the nVidia GeForce 940M (if used) has its own 2Gb of video RAM. I think if you're going to be gaming or doing similar heavy graphics you need two memory sticks. I doubt that the actual amount of RAM made a difference.
The AIO test depends a lot more on the disc than on the CPU.
On the Acer machine (with one memory stick) I investigated OpenArena in detail. If you run it individually, rather than as part of a suite, it will go through a range of resolutions, with these outcomes:
Screen px | Frames/sec | Pixels/sec |
---|---|---|
800x600 | 88.4 | 4.24e7 |
1024x768 | 54.4 | 4.28e7 |
1280x960 | 35.9 | 4.41e7 |
1280x1024 | 34.2 | 4.49e7 |
1400x1050 | 30.1 | 4.42e7 |
1920x1080 | 19.2 | 3.98e7 |
So its frame rate is inversely proportional to the number of pixels per frame, with ±6% scatter. In the suite test it did not report the resolution it was using, but comparing with these results and eyeballing the screen during the test, the closest match is to 1280x1024px, and possibly the same resolution was used on the other machines.
By looking at /proc/$PID/fd and locating the graphic device (/dev/dri/card0) in the /sys filesystem, I was able to confirm that the game was using the Intel HD Graphics 5500 controller, which is integrated on the CPU chip, rather than the nVidia GeForce 940M (GM108M GPU). Hiss, boo! Likely the demo would run even faster with the nVidia GPU.
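The check went roughly like this, where $PID is the game's process ID:
ls -l /proc/$PID/fd | grep /dev/dri            # shows the DRM node the game has open, e.g. /dev/dri/card0
readlink /sys/class/drm/card0/device/driver    # the link ends in i915 for the integrated Intel controller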
Wanting more coverage in the graphics category, I tried a lot of tests, excluding real games like OpenArena which I already had information from, but only one installed and executed successfully: Super Tux Kart. Here are the frames per second and pixels per second achieved. The Acer had two memory sticks.
Host | Screen px | Frame/sec | Pixel/sec | |
---|---|---|---|---|
Piki | 800x600 | 1.5 | 7.2e5 | |
Vaio | 800x600 | 40 | 1.9e7 | |
Vaio | 1920x1080 | 19 | 3.9e7 | |
Acer | 800x600 | 42 | 2.0e7 | |
Acer | 1920x1080 | 19 | 3.9e7 |
I tested it first on Piki, where at 800x600px resolution it got about 1.5 frames/sec (compared to 8.8 for OpenArena); I killed it. It was more successful on the Vaio and on the Acer (the latter using Intel HD 5500 graphics), which performed very similarly to each other, and roughly similarly to what they did (at 1920x1080px) on OpenArena. However, OpenArena could deliver about twice the (already high) frame rate at 800x600px, perhaps because the artwork is more complex in Super Tux Kart.
Certainly these particular games are playable on this machine, but it's hard to make a judgment whether all (Linux) games would be similarly playable. I think the achieved frame rates can be used to recognize which machines are pathetic in their gaming performance, but a functional judgment about gaming on the Acer machine (in Linux) cannot really be made from this limited test data.
I really would like to know how fast my GPU is, and how much of a power pig it is. If you do lspci, the line item for the nVidia card will include the GPU model in the device title: GM108M. The open source nouveau X-Windows driver (SuSE package name: xf86-video-nouveau) recognizes GPUs up to the GM107 but does not know about the GM108M, code name Maxwell. The recommendation in this case is to install the nVidia proprietary driver, if your security or political rules permit that.
There are several nVidia drivers depending on the GPU family; the one for this machine is [believed to be] x11-video-nvidiaG04, and dependencies will have a suffix of G04. Look at the SuSE guide to installing nVidia drivers and follow the link to the procedure to add the nVidia sub-repo. They suggest using zypper install-new-recommends (abbreviation: inr), but that is not going to fly in my context since I have suppressed a lot of recommended packages that are either useless or make trouble for me. I just did zypper install x11-video-nvidiaG04-361.28, picking the latest version from among the (unstable) packages available in the repo. As part of the installation it drags in nvidia-gfxG04-kmp-default and compiles the enormous kernel module for the standard kernel. This is driver version 361.28 dated 2016-02-03.
However, I'm using kernel 4.4.0 from Tumbleweed, to get support for the Qualcomm/Atheros QCA9377 Wi-fi NIC. After abortive and fruitless attempts to avoid version skew, I just booted the stock kernel (see below under Backport for the dramatic soap opera surrounding Wi-fi). Even so, the kernel module compiled itself for kernel 4.1.12, which had already been superseded by 4.1.15 for a security patch. I retrieved the modules and moved them to be accessible from 4.1.15, did depmod, then mkinitrd.
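The shuffle was approximately this; the version strings and the updates directory are illustrative, so use whatever the kmp package actually installed:
OLD=/lib/modules/4.1.12-1-default      # where the kmp built and installed its modules
NEW=/lib/modules/4.1.15-8-default      # the kernel actually being booted
cp -a $OLD/updates $NEW/
depmod $(basename $NEW)
mkinitrd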
The kernel module was loaded, as was the nvidia X-Windows module, which reported that it is for all Supported NVIDIA GPUs. Evidently the GM108M is not a supported GPU: the X-Windows module was unloaded and we got the Intel driver. I forced use of the nvidia driver with an explicit device and screen section in a file in /etc/X11/xorg.conf.d (a sketch of such a file follows), but it reported that it could not find any relevant devices, and exited. That was very disappointing, and that's where I left it, reverting to kernel 4.4.0 and the Intel driver.
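The forcing attempt looked something like this; the file name, identifiers and BusID are illustrative (get the real BusID from lspci):
# /etc/X11/xorg.conf.d/50-nvidia.conf
Section "Device"
    Identifier "nvidia-gpu"
    Driver     "nvidia"
    BusID      "PCI:4:0:0"
EndSection
Section "Screen"
    Identifier "nvidia-screen"
    Device     "nvidia-gpu"
EndSection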
There are several reasons that I would like to use my distro's stock kernel which is 4.1.12 at the moment.
A feature of kernel 4.4.0 runs the load average up to 1 or 2 all the time, with no actual CPU usage (this turns out not to be 4.4.0's fault).
Therefore I investigated the backported ath10k stack of drivers. To cut to the conclusion, the latest backport set is from kernel 4.2.6, which does not yet support the Qualcomm-Atheros QCA9377 NIC (PCI id 168c:0042). It does have the 168c:003c and 168c:003e. Look in ./backports-4.2.6-1/drivers/net/wireless/ath/ath10k/pci.c .
Also in kernel 4.1.12 the touchpad was inoperative. Losing the touchpad is not acceptable. Reprieve: if you set it to Basic mode in BIOS, it works in 4.1.x including acceleration and scroll gestures.
The backports documentation tells how to extract a backports package from any kernel version. I'm going to try to do this procedure on a 4.4.x variant.
These steps assume the default kernel flavor; if you use another flavor, replace default with its name.
Execute rpm -q kernel-default (or whatever variant you use). It will give the numeric version, e.g. kernel-default-4.1.15-8.1.x86_64. Look in /lib/modules for a matching directory, in my case 4.1.15-8-default . To keep this writeup simpler let's set a variable: MODS=/lib/modules/4.1.15-8-default . The build procedure relies on a link in that directory called source pointing to the correct kernel source, which should have been put there when the kernel-default-devel package was installed. Make sure you have it.
I did the build (everything short of make install) as myself, not root, so make sure the ordinary user has write permission on the working directory.
Clone the two git trees the backports documentation calls for (the backports tree itself and linux-next); they are what is used to generate the custom ath10k backports.
cd to ./backports .
cd to one of the git trees and do git tag -l . I'm targeting backports-20160122 (the most recent) and next-20160122, which may or may not correspond to v4.4.2.
cd to the respective git trees and check out (git checkout $tagname) these two revisions. The tag name in next (here next-20160122) is [believed to be] the appropriate argument to --git-revision in ./gentree.py ; a sketch of the invocation follows this list.
Run depmod 4.1.15-8-default (use your actual kernel version). Actually I'm going to move the update directory to /lib/modules and make a symlink to it within $MODS, so when there's a new kernel I can just remake that symlink in the new module directory, until the kernel gets changes that invalidate the backported driver.
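The generate-and-build step in the middle went roughly like this; the output directory name is mine, and the options are as described in the backports documentation (a sketch, not a transcript):
cd backports                       # the backports git tree, checked out at backports-20160122
./gentree.py --clean --git-revision next-20160122 ../linux-next ../backports-ath10k
cd ../backports-ath10k
make defconfig-ath10k
make
make install                       # as root; installs the modules under an updates directory in /lib/modules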
Well, not so fast. Once I rebuilt the initrd, it stopped working. When it was setting the regulatory domain it recognized that it was in the US, but did not come up with any frequency ranges, and things went downhill from there. I tried hiding the backported cfg80211.ko and mac80211.ko and rebuilding the initrd; the stock drivers were used but it didn't help the symptoms. That's where it stands at the moment.
For historical reference here is the procedure to compile a preassembled backport tree.
Then run depmod 4.1.12-1-default .
After rebooting, the ath10k modules should load automatically, and dmesg should mention backport in a notice that the backported drivers were loaded. Oops, neither of these happened. I did modprobe -v ath10k_pci ; it and its dependent modules loaded with no complaints.