I want to let users know that the data they are about to delete is still referenced.
For example: the user wants to delete the subject record "Math", but there are students enrolled in that subject, so the user should not be able to delete the "Math" record.
I would like to know the syntax for this in C#. Yes, I know how to use the DELETE syntax, but I want to warn the user that the record they are about to delete still holds important information, so they know they are about to remove critical data. I know this can be done in VB like so:
Private Sub btnDelete_Click(sender As Object, e As EventArgs) Handles btnDelete.Click
    If DTTable("SELECT * ", "tblSubjectsEnrolled", " WHERE CtrlNumber ='" & IDNumber & "'").Rows.Count > 0 Then
        MessageBox.Show("The subject schedule cannot be deleted, because there are still students enrolled in it!", "Delete!", MessageBoxButtons.OK, MessageBoxIcon.Error)
    Else
        If MessageBox.Show("The operation cannot be undone! Continue?", "Delete", MessageBoxButtons.YesNo, MessageBoxIcon.Question) = Windows.Forms.DialogResult.Yes Then
            RecordRow("DELETE FROM [tblOfferedSubjects] WHERE [CtrlNumber]='" & IDNumber & "'")
            btnSearch.PerformClick()
            MessageBox.Show("Record Deleted!", "Delete", MessageBoxButtons.OK, MessageBoxIcon.Asterisk)
            SelectRecord()
        End If
    End If
End Sub
Public Sub RecordRow(ByVal sql As String)
    Dim con As OleDb.OleDbConnection = New OleDb.OleDbConnection(conStr)
    Dim cmd As New OleDb.OleDbCommand
    cmd.CommandType = CommandType.Text
    cmd.CommandText = sql
    cmd.Connection = con
    con.Open()
    cmd.ExecuteNonQuery()
    con.Close()
End Sub
Public Function DTTable(ByVal sql As String, ByVal tName As String, ByVal filter As String) As DataTable
    Dim connection As New OleDb.OleDbConnection(conStr)
    Dim dataAdapter As New OleDb.OleDbDataAdapter(sql + " from " + tName + filter, connection)
    Dim ds As New DataSet
    dataAdapter.Fill(ds, tName)
    Return ds.Tables(tName)
End Function
I want to use something similar in that code. I also tried converting the code to C#, but I cannot call DTTable(...).Rows.Count > 0 — the compiler complains because .Rows cannot be accessed on a void return value. Can somebody help me with this?
I have shown the whole code snippet so it is easier to understand the problem.
PS: The code shown above is not mine. I am only using it as an example for this question.
PPS:
I want to restrict the user from deleting a whole record. Say a category contains subcategories. If the user wants to delete a category that still has subcategories, the deletion should not be allowed until the category no longer contains any subcategories.
Answer 0 (score: 0)
As you already mentioned, you know the DELETE syntax; your only problem seems to be displaying the message.
It is actually much the same approach as the one used in VB:
// C# equivalent of the VB event handler (assumes "using System.Data;" and
// "using System.Data.OleDb;"): check for enrolled students first, then
// confirm before deleting.
private void btnDelete_Click(object sender, EventArgs e)
{
    if (DTTable("SELECT * ", "tblSubjectsEnrolled", " WHERE CtrlNumber ='" + IDNumber + "'").Rows.Count > 0)
    {
        MessageBox.Show("The subject schedule cannot be deleted, because there are still students enrolled in it!",
            "Delete!", MessageBoxButtons.OK, MessageBoxIcon.Error);
    }
    else if (MessageBox.Show("The operation cannot be undone! Continue?",
        "Delete", MessageBoxButtons.YesNo, MessageBoxIcon.Question) == DialogResult.Yes)
    {
        RecordRow("DELETE FROM [tblOfferedSubjects] WHERE [CtrlNumber]='" + IDNumber + "'");
        btnSearch.PerformClick();
        MessageBox.Show("Record Deleted!", "Delete", MessageBoxButtons.OK, MessageBoxIcon.Asterisk);
        SelectRecord();
    }
}

// Note: your C# DTTable must be declared to return DataTable (not void),
// otherwise ".Rows" is not available on its result.
public DataTable DTTable(string sql, string tName, string filter)
{
    using (var connection = new OleDbConnection(conStr))
    using (var dataAdapter = new OleDbDataAdapter(sql + " from " + tName + filter, connection))
    {
        var ds = new DataSet();
        dataAdapter.Fill(ds, tName);
        return ds.Tables[tName];
    }
}
For more details, see:
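As a side note, the string-concatenated SQL in the snippets above is open to SQL injection. A safer way to perform the existence check is a parameterized COUNT query; the sketch below assumes the same Access/OleDb setup as the question (`conStr` and `IDNumber` as defined there), and the helper name `HasEnrolledStudents` is hypothetical:

```csharp
using System.Data.OleDb;

// Hypothetical helper: returns true when dependent rows exist, using an
// OleDb positional parameter ("?" placeholder) instead of concatenating
// the value into the SQL string.
private bool HasEnrolledStudents(string ctrlNumber)
{
    using (var con = new OleDbConnection(conStr))
    using (var cmd = new OleDbCommand(
        "SELECT COUNT(*) FROM [tblSubjectsEnrolled] WHERE [CtrlNumber] = ?", con))
    {
        cmd.Parameters.AddWithValue("?", ctrlNumber);
        con.Open();
        // COUNT(*) comes back as a scalar; any value above zero means
        // the record is still referenced and must not be deleted.
        return Convert.ToInt32(cmd.ExecuteScalar()) > 0;
    }
}
```

Usage in the delete handler would then be: `if (HasEnrolledStudents(IDNumber)) { /* warn the user */ } else { /* confirm and delete */ }`. The same pattern works for the category/subcategory case in the PPS by querying the subcategory table instead.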