Lightning Specification Meeting 2020/11/09 #813

Closed
6 of 18 tasks

t-bast opened this issue Nov 3, 2020 · 6 comments

Comments

t-bast (Collaborator) commented Nov 3, 2020

The meeting will take place on Monday 2020/11/09 at 7pm UTC on IRC #lightning-dev. It is open to the public.

Pull Request Review

Issues

Long Term Updates

Backlog

The following are topics that we should discuss at some point. If we have time to discuss them, great; otherwise they slip to the next meeting.


Post-Meeting notes:

Action items

t-bast pinned this issue Nov 3, 2020
t-bast (Collaborator, Author) commented Nov 4, 2020

Added #814 and a field report discussion on the high on-chain fee week we just had. I think it's useful to share what we learned and what can be improved when on-chain fees are that high.

ariard (Contributor) commented Nov 8, 2020

And achieve #805

t-bast (Collaborator, Author) commented Nov 9, 2020

> And achieve #805

I think we should always start by going through the action items of the previous meeting and ensure they're making progress; let's do that tonight!

manreo commented Nov 12, 2020

logs are not available:
http://www.erisian.com.au/meetbot/lightning-dev/2020/

t-bast (Collaborator, Author) commented Nov 12, 2020

Yes the logging bot is still having issues. I can paste the logs from my session here if it's helpful.

t-bast (Collaborator, Author) commented Nov 12, 2020

<niftynei> #startmeeting
<niftynei> ... awkward
<t-bast> looks like the bot still isn't fixed...
<ariard> yeah I updated #803, notably mentioning you should claim HTLC outputs also with a split-off penalty after a while
<rusty> I will fix, one sec.
<t-bast> it's not you niftynei :)
* ChanServ gives channel operator status to rusty
<niftynei> hahaha
* rusty gives voice to lightningbot
<niftynei> #startmeeting
<rusty> niftynei: I think it's already started, try endmeeting?
<t-bast> ok maybe it's you after all :D
<niftynei> #endmeeting
<niftynei> doesn't the ... starter have to end the meeting?
<niftynei> unclear.
<niftynei> ok let's get started anyway. if meetingbot shows up, we'll loop them in
<niftynei> #topic action items from last meeting
<niftynei> #link https://github.com/lightningnetwork/lightning-rfc/issues/805
<rusty> #startmeeting
<niftynei> the first item was rusty to try out #801, https://github.com/lightningnetwork/lightning-rfc/pull/801
<niftynei> and report success/failure. do you have an update for us rusty?
<rusty> niftynei: yes, he revised and I've acked.  Basically, the `tlvs` keyword is now `tlv_stream` which is clearer.
<t-bast> ACK, I think we can merge that one
<niftynei> cool. 
<niftynei> #action merge #801
<niftynei> moving on. 
<niftynei> ariard was to address nits in #803
<niftynei> #link https://github.com/lightningnetwork/lightning-rfc/pull/803
* lnd-bot ([email protected]) has joined
<lnd-bot> [lightning-rfc] niftynei pushed 3 commits to master: https://github.com/lightningnetwork/lightning-rfc/compare/5afe7028f4eb...5a86adaa7780
<lnd-bot> lightning-rfc/master 7218822 Corné Plooy: BOLT 4: link to BOLT 1 for tlv_payload format
<lnd-bot> lightning-rfc/master 13520a0 Corné Plooy: tlvs -> tlv_stream subsitution everywhere
<lnd-bot> lightning-rfc/master 5a86ada Corné Plooy: tlvs -> tlv_stream in extract-formats.py
* lnd-bot ([email protected]) has left
* lnd-bot ([email protected]) has joined
<lnd-bot> [lightning-rfc] niftynei merged pull request #801: BOLT 4: link to BOLT 1 for tlv_payload format (master...bolt04_tlv_explanation) https://github.com/lightningnetwork/lightning-rfc/pull/801
<ariard> rusty: main change is updated with your point on HTLC outputs; even though I don't think an attacker can steal them (because 2nd-level transaction outputs are also revoked), it's still a kind of DoS
* lnd-bot ([email protected]) has left
<niftynei> wow that's noisy. i'll wait to the end to merge anything else then
<t-bast> #803 looks good to me
<rusty> ariard: yes, it was always a bit underspecified.  If you had say 10 HTLC outputs, and tried to use a single penalty tx, your peer could invalidate the tx by using an HTLC tx, repeat 10 times.  But the pinning attack makes it clear sometimes you *have* to do a single tx if it's taking too long, yes.
<rusty> Yes, ack #803
<niftynei> that's two acks on #803
<ariard> well you can still claim _HTLC_ outputs with one big penalty no matter what your cheating counterparty is doing, because worst-case you claim on the 2nd stage
<t-bast> johanth had started reviewing it, what are your thoughts on the latest version?
<ariard> but if your counterparty maintains the pinning on those HTLC outputs for a while (like the 2-week mempool default expiration), that's still a time-value DoS
<lndbot> <johanth> I can look through the latest
<niftynei> that sounds like an action item to me.
<lndbot> <johanth> I did start implementing this in lnd, and did not find anything in particular that is not mentioned in the 803, so looks pretty good
<lndbot> <johanth> Yeah, will def look at the latest and give it an ACK (probably)
<niftynei> #action #803 ok to merge, pending ACK from johanth
* Guest81 (3344c770@gateway/web/cgi-irc/kiwiirc.com/ip.51.68.199.112) has joined
<niftynei> we've got two more items from last meeting
<niftynei> "make the forumlation more explicit" on #808
<niftynei> it's got two ACKs since last meeting, but is failing the TravisCI checks
<t-bast> good catch, I think we need to get it to pass the CI and we should be good to go ariard
<ariard> ah, a misspelling, fixing it
<niftynei> if there's no objection, i'd move to have this merged once CI is happy
<t-bast> niftynei: ack
<niftynei> #action merge #808 once CI is green
<niftynei> ok last item.
<rusty> niftynei: ack
<niftynei> add contact details for implementations and details not to report, #772
<niftynei> ACINQ has posted their GPG pubkey and contact info in an issue comment
<ariard> Yes I've not done this yet, on my todo, just want to take time to write something clear
<ariard> sorry for delay
<niftynei> we're missing contact details for clightning, LND, rust lightning...
<niftynei> anyone else i'm missing?
<ariard> well for RL, we don't have a security disclosure policy _yet_
<t-bast> is there someone from electrum?
<ariard> other contact details have been broadcast during the last meeting
<niftynei> ariard, are you missing contact details from anyone?
<roasbeef> what PR is up rn? still 83?
<roasbeef> is the bot working again? 
<t-bast> anyway they should be watching the repo and should react to that and post their details
<niftynei> no the bot is not working.
<ariard> ThomasV: ^
<niftynei> we're going through the action items from #805
<ariard> niftynei: electrum at least
<roasbeef> heh, dangling bot, is TheRealTBast to blame? :p 
<niftynei> it sounds like the action left on this is electrum's contact info
<roasbeef> posted the lnd gpg key to the repo
<roasbeef> as a comment
<t-bast> don't know who that guy is :)
<roasbeef> hehe
<roasbeef> re this we can also start to possibly use the "security" feature/page that github has on each repo now 
<roasbeef> we're planning on using it for lnd once we launch our bug bounty program later this year 
<roasbeef> also this PR is still marked WIP right? could prob use a bit more body than a few bullet points 
<t-bast> roasbeef: good idea, it would be nice to integrate with github's security policy stuff
<ariard> what does this GH security policy consist of ?
<roasbeef> yeh, it has a place for a policy, then you can launch advisories that'll notify ppl 
<niftynei> roasbeef: correct, this is just addressing the action item, which was for collecting contact info
<roasbeef> sec, lemme fish out the docs 
<roasbeef> https://docs.github.com/en/free-pro-team@latest/github/managing-security-vulnerabilities/adding-a-security-policy-to-your-repository
<roasbeef> will post in the PR as well 
<niftynei> great. i think we're a bit off topic. if the only contact details ariard is missing are electrum, that seems like the next action item
<roasbeef> actually maybe this one is better https://docs.github.com/en/free-pro-team@latest/github/managing-security-vulnerabilities/about-github-security-advisories 
<ariard> okay I'll have a look on it and integrate it
<niftynei> #action electrum to provide their security contact info
<niftynei> #action ariard to investigate github security feature/page
<niftynei> ok that wraps up "action items from last meeting"
<rusty> brb
<niftynei> next up is the first item on this week's agenda
<niftynei> #topic Clarify relative order of messages after reestablish
<niftynei> #link https://github.com/lightningnetwork/lightning-rfc/pull/810
<roasbeef> yeh so we encountered this a few months ago when we were playing whack-a-mole with force close retransmission bugs in lnd 
<t-bast> The under-specified section was what happened in case both a rev and a sig were due after a restart
<roasbeef> afaict it seems to plug the hole in the example crypt-iq made in the OG issue 
<roasbeef> just made a comment re if we need to care about "backwards compat" or not, thinking no since the old way was just underspecified, and everyone prob had some arbitrary ordering they chose 
<niftynei> nice. yeah
<t-bast> I believe all other cases were already correctly covered by the spec, I'm hoping with that change we should all agree on future channel_reestablish behavior!
<niftynei> iirc c-lightning has already implemented this update/does this (sends in same order as original transmission)
<roasbeef> on our end, to impl this properly, we'll need to persist a bit more data than we do already 
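(For illustration only: a minimal sketch of the ordering rule #810 pins down, with hypothetical field and function names; the point is simply to persist which message was originally sent first and replay in that order.)

```python
# Hypothetical sketch of the #810 rule: if both a revoke_and_ack and a
# commitment_signed need retransmitting after channel_reestablish, resend
# them in the same relative order they were originally sent.
from dataclasses import dataclass
from typing import List

@dataclass
class RetransmitState:
    # Persisted with the channel (the extra data roasbeef mentions above):
    # True if revoke_and_ack was originally sent before commitment_signed.
    revoke_sent_first: bool
    owes_revoke_and_ack: bool
    owes_commitment_signed: bool

def messages_to_retransmit(st: RetransmitState) -> List[str]:
    if st.owes_revoke_and_ack and st.owes_commitment_signed:
        order = ["revoke_and_ack", "commitment_signed"]
        return order if st.revoke_sent_first else list(reversed(order))
    if st.owes_revoke_and_ack:
        return ["revoke_and_ack"]
    if st.owes_commitment_signed:
        return ["commitment_signed"]
    return []
```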
<roasbeef> dunno if y'all noticed, but maybe 2 weeks or so ago, there was a buncha gossip spam, which ended up creating connection instability (write/read timeouts in sockets) and seemed to trigger this for quite a few ppl 
<t-bast> re backwards-compat, I'm also leaning towards no, it usually led to channel closure, so at least we're closing that gap now
<roasbeef> we identified what was causing it, and will have some more clamps in 0.12 
<rusty> Yes, this is my fault for not updating the spec when I finally nailed this in c-lightning...
<roasbeef> we also created a cross impl testing harness for stuff like this as well 
<roasbeef> it was how we found the CL retransmission of htlcs in non-incrementing order 
<roasbeef> and by "cross impl harness" I mean some docker containers that get randomly killed during cross-fire lol
<t-bast> nice. it's a good idea
<roasbeef> I think we're planning on publishing it as well, crypt-iq doesn't seem to be here rn tho, will follow up w/ him on that 
<niftynei> right. ok so concerning this PR, it seems like the outstanding question is backwards compatibility
<ariard> okay #810 is good for RL, we're already doing this
<t-bast> I don't think backwards-compatibility is needed, maybe at the implementation-level it makes sense to think about it but probably not at the spec level IMHO
<rusty> Agreed.
<roasbeef> for us whenever we've fixed issues in this area, we've just rolled out only the new more correct logic, we've seen that it can create some force closes when ppl go to update, but better to fix things going forward and also incentivize ppl to upgrade to the latest and greatest 
<t-bast> agreed
<niftynei> ok so i think we just need one more PR ACK, then it should be good to merge
* mauz555 (~mauz555@2a01:e0a:56d:9090:29fe:c4d1:ebb5:612a) has joined
<niftynei> #action merge #810, pending second ACK
<niftynei> if there's nothing else on this PR, we can move on to the Issues section
<t-bast> SGTM
<niftynei> #topic field report on the high on-chain fee week
<niftynei> there's no link for this topic
<niftynei> t-bast did you want to kick off the discussion?
<t-bast> Regarding the past week, very high on-chain fees and full mempool, we've seen interesting impact on LN channels
<t-bast> The most important impact was that our node updated channel feerates to match the on-chain feerate
<t-bast> With such a high feerate, every channel that was less than 150k sat was basically unusable
<roasbeef> t-bast: what conf target do y'all use? also this is no anchors i'm guessing right?
<t-bast> Because all the capacity was dedicated to potentially paying the commit tx fee
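(To make that concrete, here is the BOLT 3 commitment fee formula for pre-anchor channels; the feerate below is only an illustrative high value, not the actual rate from that week.)

```python
# BOLT 3: commitment weight = 724 + 172 per untrimmed HTLC (pre-anchor),
# fee = feerate_per_kw * weight / 1000, paid entirely by the funder.
def commit_tx_fee_sat(feerate_per_kw: int, num_untrimmed_htlcs: int = 0) -> int:
    weight = 724 + 172 * num_untrimmed_htlcs
    return feerate_per_kw * weight // 1000

# Illustrative: 100 sat/vB == 25_000 sat/kw.
print(commit_tx_fee_sat(25_000))     # 18100 sat with no HTLCs
print(commit_tx_fee_sat(25_000, 5))  # 39600 sat with 5 pending HTLCs
```

On a 150k sat channel the funder has to keep that amount (plus the reserve) set aside at all times, which is the effect described above.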
<niftynei> oh that is interesting
<t-bast> The conclusion is two-fold: anchor outputs will be great, if we can agree on a small feerate that's enough to guarantee relay
<ariard> t-bast: can't you relax the confirmation target when your channel becomes unusable?
<t-bast> ariard: yes we could, but if your remote doesn't do it at the same time and cares about the difference in estimated feerate, channels will close
<t-bast> and we don't want channels force-closing during these high on-chain fee periods...
<roasbeef> do we need to *agree* on the fee rate tho? 
<niftynei> kind of moves the target on the 'feerate' pot to a 'global feerate fund' though, no? in other words, the size of your CPFP account will need to scale with the feerate changes?
<ariard> t-bast: depends on the update_fee bounds of your counterparty, if it's pretty liberal with the lower bound
<roasbeef> in 0.12 for lnd (next release) we plan on enabling anchors by default (with some added fee leak mitigations) and clamping things on the sender side 
<t-bast> That's why I think this previous action item is important: "Evaluate historical min_relay_fee to improve update_fee in anchors (@rustyrussell and @TheBlueMatt)"
<t-bast> even with anchor outputs, if you let your remote raise the feerate, there is ariard's attack that becomes possible
<niftynei> we'd need to agree on a feerate if we remove the update_feerate, which is what i think t-bast is getting at
* mauz555 has quit ()
<t-bast> so you currently still need to bound the feerate you accept from your remote, but how? Still undecided...
<roasbeef> t-bast: so we need a ceiling instead? i think we just chose one on our end, but yeah ideally we can communicate this somehow with our feature bits or w/e 
<t-bast> I think you need both, if the feerate is lowered too much and the tx can't be relayed, you're very screwed too
<roasbeef> yep, so something low, but not tooo low
<ariard> if you bound the feerate announced by your remote that's okay but still this bound might be honest in period of high-fee
<niftynei> anecdotally, we had a user whose channel closed because the peer's suggested feerate was below the floor on what c-lightning deemed reasonable
<roasbeef> iirc stuff below 1.3 sat/byte was evicted w/ the mempool fee gradient of last week?
<ariard> said otherwise, how do you distinguish between an honest high-fee update_fee and a malicious one
<roasbeef> ariard: what do you mean by "might be honest"? 
<lndbot> <johanth> I was just gonna cap it at 20 sat/b for anchor channels for now, as the initiator
<ariard> roasbeef: I can announce a high-feerate `update_fee` during mempool congestion, and your node will accept it because it matches your view of the mempool?
<niftynei> roasbeef, what do you  mean by "clamping things on the sender side"
* rusty hides in case anyone asks him about progress on historical analysis of mempools... BlueMatt?
<t-bast> johanth: you meant fix it there? And potentially raise it if that becomes unrelayable? I was thinking of something like that too, but not too sure about the fundee behavior
<roasbeef> niftynei: so the sender picks a ceiling for how high a fee rate it'll propose 
<roasbeef> to like cooperate and clamp down on fee leak, but also do not let the fee estimator commit like 80% of the channel to fees 
<ariard> rusty: lol we're playing the ball on this with Matt, no progress from our-side...
<t-bast> rusty: xD. TBH I think we'll start implementing something like this anyway, at least when we're funders...
<roasbeef> t-bast: I think what we really need here is a clear model of how the low water mark eviction in the bitcoind mempool works 
<niftynei> mmm. iiuc that's gonna cause problems if c-lightning's still using its min fee estimator (and the min goes above the sender's ceiling)
<roasbeef> with that we could then compute a worst case scenario, like entire mempool is 500 sat/byte or something, what ends up being not accepted/evicted? 
<lndbot> <johanth> t-bast: yeah, as the initiator I won’t send update_fee above 20 sat/b (should be enough for relay, also it is configurable)
<lndbot> <johanth> as a receiver I will accept the same way as for other channel types
<niftynei> maybe i'm missing something about the 'sender' role? we haven't updated our min-feerate floor logic for anchor output channels
<ariard> roasbeef: it's the lower package feerate you need to consider: if your whole mempool is 500 sat/byte, a package under this mark will be evicted
<t-bast> I think a potential behavior could be: funder uses 3 * min_relay_fee from his mempool (or another small factor) / fundee accepts whatever still passes the relay bar on his own mempool and isn't too high (implementation-dependent, configurable)
<t-bast> very handwavy but would probably work okay-ish in practice *as long as other implementations also do something like this*
<t-bast> this is why I think we need to agree on a rough behavior
<t-bast> to avoid all channels closing when the mempool becomes full like last week :)
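(A rough sketch of that handwavy behavior, with placeholder numbers only: the factor of 3 is from t-bast's example above and the 20 sat/vB ceiling echoes johanth's figure; neither is an agreed value.)

```python
# Hypothetical funder/fundee update_fee policy as sketched above.
FUNDER_FACTOR = 3             # "3 * min_relay_fee (or another small factor)"
FUNDEE_CEILING_SAT_VB = 20.0  # implementation-dependent, configurable

def funder_proposed_feerate(local_relay_floor_sat_vb: float) -> float:
    return FUNDER_FACTOR * local_relay_floor_sat_vb

def fundee_accepts(proposed_sat_vb: float, local_relay_floor_sat_vb: float) -> bool:
    # Accept anything that still passes our own relay bar and isn't too high.
    return local_relay_floor_sat_vb <= proposed_sat_vb <= FUNDEE_CEILING_SAT_VB
```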
<lndbot> <johanth> PRFT(tm)
<lndbot> <johanth> package relay fixes this :slightly_smiling_face:
<roasbeef> t-bast: heh I 'member back in the day when a crazy mempool would cause co-op close negotiations to fail between CL+lnd 
<rusty> t-bast: how do we tell what min relay fee is: does the 'relayfee' var from bitcoin-cli getnetworkinfo change?
<roasbeef> a co-op close revamp is also needed imo, but a discussion for another time... (like how do y'all handle the other party just not budging and refusing to satisfice?) 
<roasbeef> rusty: iirc there's a getmempoolinfo call? 
<ariard> johanth: I want to try pinning scenarios in the wild, before going forward with package relay
<ariard> if someone wants to play the victim :)
<rusty> roasbeef: ah, thanks!
<lndbot> <johanth> to me it sounds impossible to agree on a “perfect” relay fee (for now), which is why I think we should clamp around 10-20 range
<t-bast> I was planning on trying out getmempoolinfo's `minrelaytxfee` and `mempoolminfee`
<roasbeef> ariard: really interested in that, as imo there's still a bit of a gap between theory and practicality when it comes to them, like can you really pin something for 2 weeks? 
<roasbeef> particularly if your "attack tree" was dropped from the mempool last week due to being too low fee 
<roasbeef> rusty: yeh according to this rando site https://chainquery.com/bitcoin-cli/getmempoolinfo 
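(For reference, a small sketch of the experiment t-bast mentions above; it assumes a local bitcoind with `bitcoin-cli` on PATH and uses the two `getmempoolinfo` fields named earlier, both reported in BTC/kvB.)

```python
# Take the stricter of the node's static relay floor and the dynamic
# mempool minimum, converted to sat/vB.
import json
import subprocess

def local_relay_floor_sat_vb() -> float:
    info = json.loads(subprocess.check_output(["bitcoin-cli", "getmempoolinfo"]))
    btc_per_kvb = max(info["mempoolminfee"], info["minrelaytxfee"])
    return btc_per_kvb * 1e8 / 1000  # BTC/kvB -> sat/vB

print(local_relay_floor_sat_vb())
```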
<t-bast> johanth: should it be 10-20 or 10-20 times your node's min_relay_fee?
<ariard> roasbeef: I've detailed write-ups to close the gap between theory and practice, rn I'm reworking RL sample node
<rusty> johanth: I say we should sit on 20 sat/kb all the time, and declare MISSION ACCOMPLISHED.
<lndbot> <johanth> I meant 10-20 sat/b
<ariard> and when it's done I'm planning to test it with the pinning scenarios
<rusty> oops, sat/b not sat/kb!  so 80 sat/sipa.
<roasbeef> ariard: cool stuff, looking forward to the ultimate findings, personally it's kinda in my "other stuff that's more practical to worry about" bucket, but happy to be proven wrong 
<lndbot> <johanth> rusty: mhm, I also prefer kicking this can a bit longer down the road
<ariard> roasbeef: 2 weeks depends on mempool congestion and the committed feerate chosen by the counterparty
<roasbeef> ariard: yeh there's a lot of variables in it really, it isn't a "deterministic" attack/scenario 
<roasbeef> imo
<ariard> roasbeef: yeah will reach out to you when I feel ready to execute it :)
<niftynei> time check: we've got 2min til the hour closes
<ariard> but yeah lot of variables to integrate, can you _predict_ mempool-congestion above channel feerate for more than HTLC delta? this kind of reasoning
<rusty> I really would like to Pick A Number, then we can agree on that for anchors, and eventually remove the update_fee logic altogether... Bwahahah...
<ariard> I would rather pick a common scaling factor * min_relay_fee logic, it's more mempool-congestion fault-tolerant
<t-bast> what about pick 20 sat/b and watch your own mempool, and if it becomes too low to relay you raise it (and fundee agrees on that raise based on its own mempool view as well and the value at risk)?
<niftynei> i'm really not very well versed in feerate/mempool logics, but it seems that picking a static number is ... kind of dangerous as a motivated attacker would have a fixed target
<roasbeef> ariard: but not all nodes have the same "minrelayfee", IIRC, if you change the size of your mempool then this value is affected 
<roasbeef> like some ppl run on rasp pis and constrain their bitcoind mempool accordingly 
<t-bast> it will never be perfect because no global mempool / different mempool size / etc but if you apply a small factor to it, you're likely ok
<rusty> niftynei: but they have to hold above that target for weeks...
<ariard> roasbeef: the "minrelayfee" is static, the mempool min fee is dynamic (IIRC) but you're right we care about the one which is dynamic
<lndbot> <johanth> I don’t think we need to pick a number. As initiator you also have an incentive to keep it low, but relayable.
<t-bast> I'm ok with closing all the channel I have with people running nodes on raspberry pis :D
<lndbot> <johanth> as a receiver, just do as we always do: check it is not insane
<niftynei> mmh i see. that does get expensive. but how expensive? 
<roasbeef> ariard: ah ok I always forget which one is static and which one is dynamic, but then they don't seem compatible right? given stuff that was above this minrelay value wasn't accepted into the mempool last week? 
<roasbeef> as in a node won't relay something it won't accept
<t-bast> aren't you ok as long as you take the max of the two?
<t-bast> (it's on my todo list to experiment with those, haven't done it yet)
<ariard> roasbeef: the relation is minrelayfee > mempool min fee, and it's a must: if your package is in between, it's not accepted
* Guest81 has quit (Quit: Connection closed)
<ariard> and yeah mempool min fee is a function of your node settings
<ariard> also your implementation version, as policy changes may affect which kind of package will be accepted
* ThomasV (~ThomasV@unaffiliated/thomasv) has joined
<ariard> so we might have a chosen feerate scaling on local view of your mempool min fee
<niftynei> (so two weeks of paying for full 2MB blocks of 20s/b feerate txs is 8k btc)
<ariard> and hope for your channel counterparty not having a view too much different
<harding> niftynei: for each 1 sat/vb increase in the min feerate, each block will have to pay an additional 0.04 BTC in fees, times 2,016 blocks in two weeks, means ~80 BTC.  So 20 s/vb means the upper bound an attacker will have to pay is 1,600 BTC (about $2.4 million at present).
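(harding's back-of-the-envelope numbers, reproduced with his ~0.04 BTC per block per 1 sat/vB figure taken as given.)

```python
# Cost for an attacker to hold the relay floor at 20 sat/vB for two weeks.
fee_per_block_per_sat_vb = 0.04  # BTC per block, per 1 sat/vB of floor (harding's figure)
blocks_in_two_weeks = 2016
target_floor_sat_vb = 20

cost_per_sat_vb = fee_per_block_per_sat_vb * blocks_in_two_weeks  # ~80 BTC
total_cost_btc = cost_per_sat_vb * target_floor_sat_vb            # ~1,600 BTC
print(cost_per_sat_vb, total_cost_btc)
```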
<roasbeef> sooo...something something pick sane value for clamps that seem "good enough"? 
<roasbeef> harding: interesante...
<t-bast> harding: interesting, thanks for those numbers
<ariard> roasbeef: yeah "good enough" without too much margin for an attacker to leak fee
<ariard> okay let me write a proposal on this for next meeting
<niftynei> mm my math's off. lol
<roasbeef> so 20 s/vb "feels good"? 
<t-bast> ariard: ack, would be great to have a written proposal we can discuss on
<roasbeef> parameterized by the size of the channel I guess 
* fiatjaf has quit (Ping timeout: 264 seconds)
<ariard> roasbeef: why parameterized by the size of the channel? because higher value means willingness to spend more in fees
<lndbot> <johanth> I did some fee leak calculations, can comment on the proposal when you have that up ariard
<roasbeef> ariard: yeh like if the chan is 10k btc or something, but prob not something we really need to worry about rn 
<niftynei> what is a "fee leak"?
<roasbeef> a name we coined for the attack ariard described where an attacker uses high fees to gain htlc value outside a breach case 
<ariard> johanth: will post on the ml as usual, but great to have absolute fee leak ranges based on channel value/chosen feerate/historical mempool congestion
<rusty> Note that if this attack were ongoing, the response would be to increase mempool sizes on lightning-nodes that can, too.
<roasbeef> so they use like 500 sat/vbyte, then siphon off the fees in the second level, and use a smaller value to confirm the htlcs, they've gained that diff in fees and the actual amount they signed off on to confirm 
<ariard> a fee leak is a scenario of punishment escape
<roasbeef> rusty: +1
<lndbot> <johanth> ariard: mhm, at a given feerate and reserve size you can calculate how many HTLCs you can safely accept
<niftynei> we're ten minutes over time
<ariard> rusty: I don't follow you there, how does increasing the mempool size prevent your counterparty from announcing a high-feerate update_fee to leak fees?
<t-bast> rusty: but you still want your txs to propagate through smaller mempools as well if you want to get them to miners
<rusty> ariard: different thread, I'm talking about an attacker increasing mempool entry feerates to try to stop txs from propagating.
<ariard> johanth: yes and you want this given feerate to be fault-tolerant against mempool congestion
<t-bast> rusty: ok I understand your comment then
<rusty> t-bast: if it really hit the fan, I would ask Blockstream to mine them...
<t-bast> rusty: XD
<roasbeef> aight Rusty has our back with Plan C 
<roasbeef> just lemmie know where to dump the transactions 
<lndbot> <johanth> oh, can I have an API?
<roasbeef> friend of a miner, friend of the people xD 
<lndbot> <johanth> `rpc EmwegwncyMine(tx)`
<t-bast> POST https://rusty.ozlabs.org/mine
<ariard> so finally cdecker is working on his LN-tx-relay-infrastructure-for-miners ahah
<roasbeef> blockStreamTakeTheWheel(tx)
<rusty> I think my point is that it's probably easier to respond if and when, than to try to design complexity into the protocol to handle all the cases (along with all the bugs we create along the way)
<niftynei> roasbeef, lmao
<t-bast> Agreed, I think we're nearing a good enough solution
<rusty> Hmm, we could actually abuse keysend for this...
<t-bast> Let's see ariard's write-up and comment on it, we're making progress
<ariard> just congesting all network mempools to block LN-tx propagation has a defined cost based on default mempool size
<niftynei> ok sounds like we have an action item for this
<lndbot> <johanth> is somebody not running their own miner? muh decentralisation
* roasbeef looks over at the BFL Jalapeño on his desk 
* roasbeef slaps casing, this baby does 5 GH/s 
* sr_gi has quit (Read error: Connection reset by peer)
* sr_gi ([email protected]) has joined
<roasbeef> ok g2g on my end, solid meeting y'all 
<rusty> Yep, thanks everyone!
<t-bast> same for me, thanks everyone and thanks niftynei for chairing!
<niftynei> #action ariard to write up concrete proposal for next meeting
<niftynei> #endmeeting

t-bast unpinned this issue Nov 23, 2020
t-bast closed this as completed Nov 23, 2020