on the possibility: 2025, as heard by Joshua.
X1Plus Expander: calling my shots
One of my original goals with the X1Plus Expander program was to try to be as transparent as I could about it -- all the work that I did, where the money went, and where the time went. My best guess was that I wasn't going to make much money from it, and definitely not enough to try to hold secrets about it -- I expected that the most important outcome for me would be educational. And, if I documented the project, it might also serve as good reference material for other people trying to do a similar thing!
I set these as my goals -- make more than zero money; deliver a product; and write about it -- but along the way, I made plenty of predictions just for myself. So here's the question: how many of these goals did I succeed in? And, if I made predictions, did I get anywhere close to calling my shots?
Dolla dolla bill, y'all
The not-so-secret here is that I'm really bad at accounting -- mostly because it's a task that I don't have good methodologies around, and also, it's a task that I hate doing, so I keep not doing it. But accounting is like cleaning a toilet -- if you keep up with it regularly, it's not so bad, and if you keep putting it off, it is a disgusting mess.
My books for this project are a disgusting mess.
Top-down cost
Every time I, Dan, or Accelerated Tech spent money, I put it in a Google Sheet with a description of the expense, the date, the dollar amount, and who paid. The good news, I suppose, is that there were few enough expenses that I could categorize them later by hand, and totalize them with a script.
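The "totalize it with a script" part was nothing fancy. A minimal sketch of the idea -- the rows, column names, and categories here are made up for illustration, not the real sheet's:

```python
from collections import defaultdict

# Hypothetical expense rows, as they might come out of a spreadsheet export.
rows = [
    {"description": "mainboard validation run", "category": "prototypes", "amount": 500.00,  "paid_by": "Joshua"},
    {"description": "pogo pin test jig",        "category": "prototypes", "amount": 213.37,  "paid_by": "Accelerated Tech"},
    {"description": "production PCBA deposit",  "category": "COGS",       "amount": 4000.00, "paid_by": "Accelerated Tech"},
]

# Totalize by category, largest first.
totals = defaultdict(float)
for row in rows:
    totals[row["category"]] += row["amount"]

for category, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{category:>12}: ${total:,.2f}")
```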
Since I was working on this project as Accelerated Tech, I set up X1Plus Expander in my time-tracking system, TSheets. I "billed time" to Expander when I used hours that I otherwise would have spent doing paying client work. What that meant was if I spent an evening working on Expander firmware because I wanted to, it didn't count -- that was home time. But if I spent an office day forcing myself to test boards, that was billable time, because I sure wasn't going to spend that evening doing client work instead! To put a dollar value on it, I took the number of hours billed to the program, and multiplied by my current open source "friends rate".
Once I categorized everything, I got to do the fun part -- some data visualization! Because I was fully budgeting based on vibes, I didn't already know where all the money went on this project, and this was the first time I got to find out. I threw it into d3.js, and out came the following Sankey chart.
Hey, this is not a bad outcome at all! I had a few hunches, and this validates them:
- Crowd Supply told me not to price prototype costs in, but I was pretty sure the program was going to cover them eventually -- even though the prototype costs felt pretty big at the time (we spent $500 at a time on validation runs of the mainboard). It did; we spent $5,348 on prototype equipment, or about 9% of the whole program budget. I guess that was fine.
- I thought that the COGS was going to be a relatively small percentage of where the money went. That's about right: of the $76,377 that came in, the landed cost of goods at Mouser was $24,751, or 32%.
- I thought that I was going to be net profitable, if you didn't count my time at all. I was -- there was positive profit in there!
- I wasn't sure where we were going to land on Accelerated Tech cost (that is to say, including time I billed to work), but I had a hunch it was going to be about a wash. It turns out that I spent slightly more time than Accelerated Tech made in profit on this whole program.
If you include the time I spent from work, Accelerated Tech came out in the red by... $711, on a $61,144 program (or, I went overbudget on time by 1.2%). I'd say that's as close as it gets to hitting the nail on the head, especially for my first try at one of these.
Of course, if you consider my and Dan's home time -- conservatively, we spent close to 1500 hours hacking on this? -- well... let's just say the situation becomes a whole lot more grim.
Good thing we agreed not to count that.
Cost of goods sold
There's another view -- how did the unit economics for the project compare to what I predicted? I have data on this, too! Roughly:
- When I submitted the project to Crowd Supply in July 2024, I expected that my cost of goods sold, landed in Mountain View, was going to be $47.08/unit, in quantity 200, including extras for yield fallout. (I think I got quotes from JLC for this, and used some hand-waving math to guess everything else.)
- Later, in September 2024, they gave me an actual spreadsheet to work through! I got quotes from a bunch of vendors that time -- I had PCBWay quote me in qty200, I got JLC to autoquote me, and I also got quotes from my final selected vendor, King Credie, in quantities 150, 300, and 500. In quantity 500, I predicted that my landed cost was going to be $29.50/unit, including fallout.
- When all was said and done, I made and sold 567 units, and the total cost by the time it arrived on my front door (including extras) was $15,695.29, or $27.68/unit.
I broke this down graphically by where the specific costs went.
A few things to note:
- I had expected tariffs to be much higher. In reality, I got lucky in three ways. For one, the HTS codes that my products were classified under fell more towards the 10% side than the 25% side, at the time. For another, many of my shipments were relatively small pieces, and therefore fell under the de minimis threshold -- so the tariff load was $0 for those shipments! And third, and most importantly, the shipments (mostly) arrived before the Liberation Day tariffs, which would have easily doubled my COGS. Sheesh.
- I explicitly multiplied out for yield fallout in my preproduction estimates; when I actually ordered parts, I covered fallout by simply ordering about 10% extra. But, since that was part of the order, I couldn't really split it out of the unit cost -- so it gets folded into each individual piece in the final tally.
- This doesn't include production samples, which I hadn't budgeted for at the beginning; that amortizes to about an extra $1.98/unit. But, on the other hand, if I were going to order another run, the marginal cost of more units would not include this, so I didn't include it here.
All told, I'm also pretty impressed by this! Coming in below my estimate is surprising, especially given that the scope crept a fair bit to add nicer boxes, screws, a document, labels, etcetera; I ended up coming in about 6% under budget, which is pretty darn good. Obviously I got lucky on tariffs, but even if I hadn't, and I had paid 25% on everything that I brought into the US, I would have come in only 9% high.
Schedule
I was pretty happy with what I achieved in terms of cost. On schedule, things were... well, a little less rosy. The executive summary is that, over the course of a 13 month program, the delivery dates pushed by about two and a half months. This is not great, but maybe by the standards of crowdfunding projects, it's not the end of the world...
I was going to try to put together a graphical timeline for this, but gosh, this post has taken long enough to write at this point, and I couldn't find a good library to generate one. So, instead, here's a rough chronological history of Expander.
- July 11, 2024: I submitted the project to Crowd Supply. (More on that in some future post, maybe.)
- August 8, 2024: Hey, mail from Josh at Crowd Supply -- we had an agreement signed!
- August 30, 2024: The project pre-launched on Crowd Supply.
- September 19, 2024: In some internal communication to Crowd Supply, I said that for a fund-to-shipment time, an "8 week turn time seems believable ... the initial shipment could be 200 units, with 200 units per week thereafter". Man, that was optimistic!
- October 25, 2024: This one isn't exactly schedule-related, but I commented that "re: the other accessories: I expect the take rate to be relatively low". It was not! I made 567 Expanders, 642 modules, and 315 cases. That's an accessory take rate of, uh... 169%. Well then.
- November 2, 2024: During Supercon, we kicked off crowdfunding! I gave my first public commitment: I said that I'd deliver by April 12, 2025.
- December 13, 2024: Crowdfunding was finished.
- December 16, 2024: I had a signed purchase order in hand from Crowd Supply! (For tax reasons, I had them wait to send me the money until January 1st, but I started executing.)
- January 14, 2025: The first production samples arrived in hand.
- January 23, 2025: In internal communication to Crowd Supply, I optimistically said that "X1P-100 is trending on time". Then I went on vacation to go skiing in Japan for a few weeks.
- March 5, 2025: Three gigantic boxes of mainboard PCBAs arrive on my doorstep. I start questioning my life choices.
- April 10, 2025: PCBs have definitely not left my house. My life choices continue to come into question. I send e-mail saying that I'm "hoping to get the first shipment in the mail to Mouser by the end of April". My (fantastic!) Crowd Supply project manager, Chris, having played this game before, sets the revised ship date on the web site to May 31st, rather than April 30th.
- May 6, 2025: The first shipment goes out to Mouser! It's 80 sets of X1P-800 plastics.
- May 8, 2025: Emboldened by my success, I put another box in the mail, this time some module boards. I figure that I wanted to make sure that there was a good stock of add-ons, so that people who ordered more stuff and gave me more money didn't have to wait longer than people who gave me less money.
- May 22, 2025: The first 48 X1P-100 bundles go in the mail to Mouser! Technically, a non-zero number of X1P-100s have arrived in customer hands by the promised May 31st. We update the site to say "orders placed now ship June 30th".
- June 13, 2025: Enough X1P-100 bundles go in the mail to fulfill all crowdfunding orders (but not preorders placed between December and June!).
- July 30, 2025: We update the box to say "Orders placed now ship August 1st".
- August 16, 2025: Enough X1P-100 bundles go in the mail to fulfill all outstanding orders!
- August 26, 2025: The remaining few bundles and cases go in the mail, finally fulfilling all my obligations to Mouser / Crowd Supply. At long last, I take a deep breath.
I think this outcome is not horrible. Does it seem like something I could improve on? Yeah, it is. But the biggest thing about this is that people were always very happy to receive their packages! And buyers were really quite gracious when I sent e-mails announcing delays. (It is almost like they have bought crowdfunded projects before.) So, you know, all's well that ends well.
Other thoughts
Back when I was at NVIDIA, one of the things that I always found interesting is that we tried pretty hard to call our shots. Obviously we wanted silicon to have the best performance and power consumption and all that, and obviously if we missed our specs, that was bad and some manager was probably going to get a bit of a dressing down for that.
But the other direction was about as important! Not only was it important to have good results, but it was important that we were able to accurately predict what we were going to achieve. If we had dramatically better results than we had planned, Jensen was also going to get on someone's case for that, because that was also a bug in our process.
I took that lesson to heart. I knew going into this program that I was not going to make a huge amount of money. That was something I accepted as the cost of getting to learn how to do this. But I did try to be at least reasonably methodical in my estimation of what things would cost -- and if I had to make wild-assed guesses, at least they ought be scientific wild-assed guesses. There is definitely a satisfaction to not just delivering the thing, but approximately doing it when and how I predicted.
At the start, I had claimed to myself that my goals were:
- Make an amount of money greater than or equal to $0
- Deliver roughly what I had promised, roughly when I promised
- Try to write about it and set an example for other people to either follow or avoid
The execution on this project was far from perfect (and that'll be the subject of my last post on the matter, coming next!) ... but through the lens of the above goals, it seems like I might not have done half bad at all.
Characterizing my first attempt at copper-only passives
Last year, I kind of got a bee in my bonnet about trying to see if I could accurately (?) make small RF passives out of copper, rather than buying 2pF NP0 capacitors or something, as part of a long-on-the-horizon project to make extremely inexpensive GHz-class oscilloscope probes. I figured that the right place to start was to fab out a board on JLCPCB's JLC04161H-3313 stackup with a handful of calibration standards, and some of the passives that I wanted to measure. Then, I took it over to Tom Verbeure's house, and we had a shootout trying to measure these passives with our varying test equipment -- he and his HP 8753C, and me and my NanoVNA.
Getting an accurate measurement of the DUT
Previous results
The results were not great, but we didn't really know that yet. Now that I just got a new Siglent SVA1032X, though (and a LibreCAL eCal!), I figured that I'd better revisit this. When we last left our heroes, I had two versions of the board -- one finished with HASL, and one finished with ENIG. And we had already discovered that some of the standards and DUTs were designed incorrectly, so we had made some changes to the boards with a knife. Here's what the board looked like most recently.
Let's focus on measurements taken of the HASL version, and of the 2pF DUT.
BOARD = "hasl"
DUT = "2pf"
I decided to try to figure out SciKit-RF for this application.
import matplotlib.pyplot as plt
import skrf
from skrf.media import DefinedGammaZ0, Coaxial
from skrf.calibration import OnePort
skrf.stylely()
%matplotlib inline
%config InlineBackend.figure_format='retina'
# This is the frequency space of my SVA1032X's measurements.
freq = skrf.Frequency(0.1, 3200, 1001, "MHz")
ideal_medium = DefinedGammaZ0(frequency = freq, z0 = 50)
First off, let's plot something that we saw last time. Last time we did this, we used the ports on the board above as calibration standards. We'll plot a Smith chart of the S11 of the 2pF DUT, as measured that way. Hopefully, we will see a capacitor.
dut_hp8753c = skrf.Network(f'../{BOARD}-{DUT}-hp8753c.s1p')
dut_nanovna = skrf.Network(f'../{BOARD}-{DUT}-nanovna.s1p')
print(dut_hp8753c)
print(dut_nanovna)
dut_hp8753c.plot_s_smith()
dut_nanovna.plot_s_smith()
This is where we left off last time. The 8753C and NanoVNA are broadly in agreement, and it looks capacitive, sort of, I guess, inasmuch as it rotates counterclockwise from an open. But it doesn't seem right. Notably, there are S11 values greater than 0dB -- the S-parameters say that we get more energy back than we put in!
New measurements on the SVA1032X
When I returned to this a year later, I measured with a different technique on my SVA1032X. I defined my calibration plane to be the edge of the connectors, rather than trying to define it based on the edge of the DUT, which clearly was not working for some reason. Here's what I measured with the calibration plane being the connector, for the DUT.
dut_meas = skrf.Network(f'{BOARD}-{DUT}.s1p')
dut_meas.plot_s_smith()
This makes a little more sense: the Smith chart is rotating around, which is an indication that there is a capacitor there, but there is a transmission line in series with it. And it spirals inwards, which is an indication that there is some loss between our DUT and our calibration plane.
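That intuition is easy to reproduce synthetically: a load reflection $latex \Gamma_L$ seen through a length $latex d$ of line with propagation constant $latex \gamma = \alpha + j\beta$ appears as $latex \Gamma_{in} = \Gamma_L e^{-2 \gamma d}$ -- the $latex \beta$ term rotates the trace around the chart, and the $latex \alpha$ term spirals it inward. A quick numpy sketch; the line length and loss figure here are invented for illustration, not my board's:

```python
import numpy as np

f = np.linspace(1e8, 3.2e9, 401)       # 0.1 to 3.2 GHz
w = 2 * np.pi * f
C, d, vp = 2e-12, 0.040, 2e8           # 2 pF cap, 40 mm of line, ~0.66c phase velocity

ZL = -1j / (w * C)                     # ideal capacitor at the far end
gamma_L = (ZL - 50) / (ZL + 50)        # its reflection coefficient: |gamma| = 1, pure rotation

alpha = 1e-9 * f                       # toy loss that grows with frequency (Np/m)
g = alpha + 1j * w / vp                # propagation constant of the line
gamma_in = gamma_L * np.exp(-2 * g * d)  # what the VNA sees: rotates *and* spirals inward
```

Plot `gamma_in` on a Smith chart and you get qualitatively the same inward spiral.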
Reproducing the old results with the new data
I also measured the S11 response of my open, short, and load "standards" at my new (arguably, more consistent and correct...) calibration plane. Theoretically, I should be able to "calibrate" the above response to those in software, and it should look like what we measured a year ago. Let's run that calibration on the new data, then plot the result and compare it against the previous measurement.
cal_boardstd = OnePort(
    ideals = [ ideal_medium.open(), ideal_medium.short(), ideal_medium.match() ],
    measured = [
        skrf.Network(f"{BOARD}-open.s1p"),
        skrf.Network(f"{BOARD}-short.s1p"),
        skrf.Network(f"{BOARD}-load.s1p"),
    ])
cal_boardstd.run()
dut_boardstd = cal_boardstd.apply_cal(dut_meas)
dut_boardstd.name = f"{BOARD}-{DUT}, calibrated to board standard"
dut_boardstd.plot_s_smith()
dut_hp8753c.plot_s_smith()
OK, this proves that I have at least enough of an understanding to replicate the incorrect thing that I did before using the Scikit-RF tools, rather than the on-device calibrations! I guess if the desired behavior is to de-embed the transmission lines in the DUT, this, uh, re-embeds them, or something. Anyway, the plots look consistent.
Calibrating for port extensions
With my new VNA, I found a cool button on it labeled "port extensions". My understanding is that it's good for when you're lazy and want to deembed a known length of ideal transmission line that you have attached to the end of your already-calibrated S11 measurement setup. The results are not going to be perfect, because the transmission line you actually attached is not ideal (it has loss and its own reflections) -- but it does mean that you can get closer to measuring something that's on the other end.
When I tried this on the SVA1032X, I got a much nicer looking capacitive Smith chart! Could I replicate this "port extension" behavior with the raw data in Scikit-RF? My plan was that I would try to apply a calibration as if I had measured ideal standards a transmission line away. That is to say, I was effectively relocating my calibration plane to be one piece of coax longer than it was originally measured to be.
DIST = 48 #mm
lossy_medium = Coaxial(frequency=freq, z0_override=50, tan_delta = 0.03) # crummy FR4 board
cal_exten = OnePort(
    ideals = [ ideal_medium.open(), ideal_medium.short(), ideal_medium.match() ],
    measured = [
        lossy_medium.delay_open(DIST * 1000 / 0.66, 'um'),
        lossy_medium.delay_short(DIST * 1000 / 0.66, 'um'),
        ideal_medium.match(), # a distance-shifted match is still a match
    ]
)
cal_exten.run()
dut_exten = cal_exten.apply_cal(dut_meas)
dut_exten.name = f"{BOARD}-{DUT}, calibrated to {DIST} mm extension"
dut_exten.plot_s_smith()
These results looked a lot more like a capacitor! And, importantly, they were not physically impossible.
One thing I realized as I was looking through the scikit-rf APIs is that there was an easier way to express what I meant: they have an easy mechanism to modify a network by cascading it with another. Since S-parameters are linear time-invariant functions, that also means that all S-parameters must have an inverse -- that is to say, if I have a transmission line on my desk that cascades certain S-parameters onto the front of some other network, then there also must be a set of S-parameters that I can cascade on the front that perfectly cancel out that transmission line's S-parameters. (If you find that physical representation of an anti-transmission line, though, please let me know -- I'd like to have a few on my desk.)
In this case, I could take my dut_meas network, and cascade the inverse of my theorized perfect extension on the front, which they refer to as "de-embedding". So I would take a model of this extension, invert it, and cascade it, and the Smith chart of the output ought be the same:
def mk_dist_mm(dist):
    return lossy_medium.line(dist * 1000 / 0.66, 'um')
dut_exten_2 = mk_dist_mm(DIST).inv ** dut_meas
dut_exten_2.name = f"{BOARD}-{DUT}, de-embedding a {DIST} mm extension"
dut_exten_2.plot_s_smith()
And indeed, it is. That's a good trick to have in my back pocket.
Deembedding the launch with poor quality standards
This looks pretty good, but one problem is that it includes the connector and the transmission line on the board. We know that the amplitude response of our measurement trails off, but we don't know whether the amplitude response of the capacitor trails off, or whether that's an artifact of the launch. Previously, we tried using the open / short / load standards on the board to deembed the connector launch and transmission line, but those gave very bizarre results.
When I played around with the port extension feature on the VNA, it became obvious why. Even very small changes in the length of the extension produced very large changes in the Smith chart up in the 3GHz range. Looking at the board, the standards on the board do not have a consistent length! Could this have caused the extremely strange-looking S11 plot that I was seeing last year?
With my new knowledge of calibrating things out, I realized that I could potentially "fix" the calibration standards. I knew that the calibration standards were probably fine, except for the fact that they were unusual lengths. If I included the lengths in the definition of the calibration standard, then I could probably move the calibration plane (or, well, an approximation thereof) right up to the edge of my DUT.
Looking at the board, it looks like the place where we cut the open is around 1.5-2 mm longer than the edge of the capacitor; the load is similarly far away; and the short is 2.5-2.8mm or so too... short. If I cascade these lines (or these anti-lines) with the definition of my calibration ideals, I should be able to calibrate these out. Let's try it:
cal_boardstd_ext = OnePort(
    ideals = [
        mk_dist_mm(1.6) ** ideal_medium.open(),
        mk_dist_mm(2.7).inv ** ideal_medium.short(),
        mk_dist_mm(1.6) ** ideal_medium.match() # which is the same as a match unmodified, but...
    ], measured = [
        skrf.Network(f"{BOARD}-open.s1p"),
        skrf.Network(f"{BOARD}-short.s1p"),
        skrf.Network(f"{BOARD}-load.s1p"),
    ])
cal_boardstd_ext.run()
dut_boardstd_ext = cal_boardstd_ext.apply_cal(dut_meas)
dut_boardstd_ext.write_touchstone(f"{BOARD}-{DUT}-boardstd_ext")
dut_boardstd_ext.name = f"{BOARD}-{DUT}, calibrated to board standards w/ extensions"
dut_boardstd_ext.plot_s_smith()
Now that, at last, looks like a capacitor.
How capacitor is capacitor?
One of the original goals of this experiment was to know whether I had made a capacitor -- and, if so, how good of a capacitor, and what the value of the capacitor was. A year ago, we just sampled a handful of points on the curve and asked the VNA tool what it thought the capacitance at each point was, and I think we took this as gospel and just chose a point that seemed right.
How it does this underneath is by converting the S-parameters to $latex Z$-parameters, and splitting the complex impedance into a combination of a real impedance and either an inductor or a capacitor. We can do this because a capacitor's impedance $latex Z_C$ at a given frequency $latex \omega$ is defined as:
$$Z_{C} = - \frac{j}{\omega C}$$
This means that at every frequency, a capacitor's imaginary impedance ought always be negative, and its real impedance ought always be zero -- if it's a good capacitor, that is.
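A pure-numpy sanity check of those sign conventions, on an idealized 2 pF capacitor (no measurement data involved):

```python
import numpy as np

f = np.linspace(0.1e6, 3.2e9, 1001)
w = 2 * np.pi * f
C = 2e-12

Z = -1j / (w * C)              # ideal capacitor: Z_C = -j / (omega C)

assert np.all(Z.imag < 0)      # imaginary impedance is always negative...
assert np.allclose(Z.real, 0)  # ...and real impedance is exactly zero

C_back = -1 / (Z.imag * w)     # invert the formula to recover the capacitance
assert np.allclose(C_back, 2e-12)
```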
Given that we have this all numerically represented in... a computer... we can convert the S-parameters to $latex Z$-parameters, and plot them, hopefully. When we tried to do this a year ago, we had some extremely surprising data. (Since these plots are logarithmic, dotted regions indicate negative values, which could not otherwise be represented.)
import math
import numpy as np
def plot_re_mag(ax, dut_plot):
    color = next(ax._get_lines.prop_cycler)['color']
    ax.plot(dut_plot.f, np.where(dut_plot.z_re[:, 0, 0] < 0, np.nan, abs(dut_plot.z_re[:,0,0])), color=color, label=dut_plot.name)
    ax.plot(dut_plot.f, np.where(dut_plot.z_re[:, 0, 0] > 0, np.nan, abs(dut_plot.z_re[:,0,0])), ':', color=color)
    ax.set_ylim(0.01,100)
    ax.set_yscale("log")
    ax.set_title("Magnitude of real component of impedance")
    ax.yaxis.set_major_formatter(lambda y,pos: f"{10 * math.log10(y):.0f} dB ohm")
    ax.legend(loc="upper right", fontsize=6)
    ax.xaxis.set_major_formatter(lambda y,pos: f"{y/1e9:.1f}\nGHz")
def plot_im_mag(ax, dut_plot):
    color = next(ax._get_lines.prop_cycler)['color']
    ax.plot(dut_plot.f, np.where(dut_plot.z_im[:, 0, 0] < 0, np.nan, abs(dut_plot.z_im[:,0,0])), color=color, label=dut_plot.name)
    ax.plot(dut_plot.f, np.where(dut_plot.z_im[:, 0, 0] > 0, np.nan, abs(dut_plot.z_im[:,0,0])), ':', color=color)
    ax.set_ylim(0.1,10000)
    ax.set_yscale("log")
    ax.set_title("Magnitude of imag component of impedance")
    ax.yaxis.set_major_formatter(lambda y,pos: f"{10 * math.log10(y):.0f} dB ohm")
    ax.legend(loc="upper right", fontsize=6)
    ax.xaxis.set_major_formatter(lambda y,pos: f"{y/1e9:.1f}\nGHz")
fig,(ax0, ax1) = plt.subplots(2,1, sharex = True, figsize=(7,6))
plot_re_mag(ax0, dut_boardstd)
plot_im_mag(ax1, dut_boardstd)
Measuring capacitance, instead of magnitude
Looking just at the imaginary component of impedance is a little hard to interpret, since really it needs to be normalized to frequency. On the real component, though, we definitely see something strange: it ought be extremely close to 0, and it definitely is not. That is to say, it should be well into the negative-many-dB-ohm range. What if we replot the imaginary component as the effective capacitance, though? We know that
$$Z_{C} = - \frac{j}{\omega C}$$
So we should be able to rephrase this as:
$$C = - \frac{1}{\textrm{Im}(Z_{C}) \cdot 2 \pi f}$$
We're hoping this is going to be around 2 pF. So let's plot around the picofarad scale.
def plot_cap(ax, dut_plot):
    ax.plot(dut_plot.f, -1/(dut_plot.z_im[:,0,0] * 2*math.pi*dut_plot.f), label=dut_plot.name)
    ax.set_ylim(-2e-12, 8e-12)
    ax.set_title("Capacitance")
    ax.yaxis.set_major_formatter(lambda y,pos: f"{y/1e-12:.0f} pF")
    ax.xaxis.set_major_formatter(lambda y,pos: f"{y/1e9:.1f}\nGHz")
    ax.legend(loc="upper right", fontsize=6)
fig,ax = plt.subplots(1,1, sharex = True, figsize=(7,4))
plot_cap(ax, dut_boardstd)
At long last, measuring a capacitor (I hope)
This is close! But it goes completely wrong at high frequency -- in fact, into negative capacitance. That can't be right. Now let's take a look at the version with our calibration recovery tactics.
fig,(ax0, ax1, ax2) = plt.subplots(3,1, sharex = True, figsize=(7,9))
plot_re_mag(ax0, dut_boardstd_ext)
plot_re_mag(ax0, dut_exten)
plot_im_mag(ax1, dut_boardstd_ext)
plot_im_mag(ax1, dut_exten)
plot_cap(ax2, dut_boardstd_ext)
plot_cap(ax2, dut_exten)
Both traces look pretty plausible. It's hard to tell which one is actually "right" -- since FR4's $latex D_K$ reduces with frequency, we should expect the capacitance to also lower with frequency, which the simple extension calibration (red trace) shows, and the extended board standard calibration (blue trace) shows the opposite of.
On the other hand, we should expect more error of some kind -- of a kind I can't intuitively predict -- in the red trace than in the blue trace, because the red trace does not compensate for the fact that the characteristic impedance of the transmission line also changes with frequency, whereas the blue trace should compensate for that.
But for now, I think I've met my objective a fair bit more closely than I did before. This looks a lot more like a capacitor, both on the Smith chart and by looking at the $latex Z$-parameters. I think to really be able to characterize this capacitor, I would need to do a better job of designing my test fixture. But, more to the point, now I have a much better understanding of what I'm actually doing when I calibrate, and what kinds of questions it's even possible to ask of a set of S-parameters.
J. Wise, June 2025
dut_4pf = skrf.Network(f'{BOARD}-4pf.s1p')
dut_4pf_exten = cal_exten .apply_cal(dut_4pf)
dut_4pf_boardstd_ext = cal_boardstd_ext.apply_cal(dut_4pf)
dut_4pf_exten.name += f", calibrated to {DIST} mm extension"
dut_4pf_boardstd_ext.name += ", calibrated to board standards w/ extensions"
fig,ax = plt.subplots(1,1, sharex = True, figsize=(7,4))
plot_cap(ax, dut_4pf_boardstd_ext)
plot_cap(ax, dut_4pf_exten)
Interestingly, the dip effect of the extension is even greater in this test, while the rise from the calibrated-to-board-standards version is about constant in terms of total capacitance. Why is that?
6nH inductor
This one is pretty similar, except, of course, for the fact that inductance is positive. The impedance of an inductor is represented as:
$$Z_L = j\omega{}L$$
So we can hopefully calculate the inductance with:
$$L = \frac{\textrm{Im}(Z_{L})}{2 \pi f}$$
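As a pure-numpy sanity check of that formula, on an idealized 6 nH inductor (again, no measurement data involved):

```python
import numpy as np

f = np.linspace(0.1e6, 3.2e9, 1001)
w = 2 * np.pi * f
L = 6e-9

Z = 1j * w * L             # ideal inductor: Z_L = j omega L

assert np.all(Z.imag > 0)  # inductive impedance is positive-imaginary
L_back = Z.imag / w        # Im(Z_L) / (2 pi f) recovers the inductance
assert np.allclose(L_back, 6e-9)
```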
For some reason, I moved the inductor about 8mm south. Why did I do that? I have no idea, but now we have to compensate for it.
def plot_ind(ax, dut_plot):
    ax.plot(dut_plot.f, dut_plot.z_im[:,0,0]/(2*math.pi*dut_plot.f), label=dut_plot.name)
    ax.set_ylim(0, 12e-9)
    ax.set_title("Inductance")
    ax.yaxis.set_major_formatter(lambda y,pos: f"{y/1e-9:.0f} nH")
    ax.xaxis.set_major_formatter(lambda y,pos: f"{y/1e9:.1f}\nGHz")
    ax.legend(loc="upper right", fontsize=6)
dut_6nh = skrf.Network(f'{BOARD}-6nh.s1p')
dut_6nh_exten = cal_exten .apply_cal(dut_6nh)
dut_6nh_boardstd_ext = cal_boardstd_ext.apply_cal(dut_6nh)
dut_6nh_exten.name += f", calibrated to {DIST} mm extension"
dut_6nh_boardstd_ext.name += ", calibrated to board standards w/ extensions"
dut_6nh_exten_moved = mk_dist_mm(7) ** dut_6nh_exten
dut_6nh_exten_moved.name = dut_6nh_exten.name
dut_6nh_boardstd_ext_moved = mk_dist_mm(7) ** dut_6nh_boardstd_ext
dut_6nh_boardstd_ext_moved.name = dut_6nh_boardstd_ext.name
fig,(ax0,ax1) = plt.subplots(2,1, sharex = True, figsize=(7,8))
plot_re_mag(ax0, dut_6nh_boardstd_ext_moved)
plot_re_mag(ax0, dut_6nh_exten_moved)
plot_ind(ax1, dut_6nh_boardstd_ext_moved)
plot_ind(ax1, dut_6nh_exten_moved)
It is not super duper flat, and basically the best argument that this provides is that I ought redesign the board with better standards, and measure. But it is pretty close to the rough back-of-envelope calculations from when I designed it, so I think I'm OK with that!